Pub Date : 2025-04-08 DOI: 10.1016/j.cma.2025.117995
Shanyao Deng , Weibin Wen , Pan Wang , Shengyu Duan , Jun Liang
This paper introduces a novel multi-resolution topology optimization method that combines the parametric level set method (PLSM) and the quasi-smooth manifold element (QSME) [1]. The QSME has high accuracy and high-order continuity, and its degrees of freedom have clear physical meanings. By employing the QSME for structural analysis on a coarser analysis mesh and the PLSM for updating design variables on a finer design mesh, the proposed QSME-MPLSM can obtain clear and smooth optimized structures with high computational efficiency and reliable structural performance. By integrating the features of the QSME and the PLSM, this paper proposes an element subdivision technique (EST). The EST accurately captures the integration domain of an element and avoids the need for mesh refinement or additional element nodes. This paper presents a detailed formulation of the QSME-MPLSM for minimum compliance topology optimization problems, including sensitivity analysis, a design mesh generation method, and an EST-based element stiffness matrix update method. Representative 2D and 3D numerical examples are presented to validate the effectiveness of the QSME-MPLSM. The results demonstrate that this method enhances both the efficiency and accuracy of topology optimization and yields reliable optimized results.
{"title":"A multi-resolution parameterized level set method based on quasi-smooth manifold element","authors":"Shanyao Deng , Weibin Wen , Pan Wang , Shengyu Duan , Jun Liang","doi":"10.1016/j.cma.2025.117995","DOIUrl":"10.1016/j.cma.2025.117995","url":null,"abstract":"<div><div>This paper introduces a novel multi-resolution topology optimization method that combines the parametric level set method (PLSM) and quasi-smooth manifold element (QSME) [<span><span>1</span></span>]. The QSME has high accuracy and high-order continuity, and its degrees of freedoms have clear physical meanings. By employing the QSME for structural analysis on a coarser analysis mesh and PLSM for updating design variables on a finer design mesh, the proposed QSME-MPLSM can obtain clear and smooth optimized structures with high computational efficiency and reliable structural performance. By integrating the features of QSME and PLSM, this paper proposes an element subdivision technique (EST). The EST can accurately capture the integration domain of element and avoids the need for mesh refinement or additional element node. This paper presents a detailed formulation of the QSME-MPLSM for minimum compliance topology optimization problems, including sensitivity analysis, a design mesh generation method, and an EST-based element stiffness matrix update method. Representative 2D and 3D numerical examples are presented to validate effectiveness of the QSME-MPLSM. The results demonstrate that this method can enhance both the efficiency and accuracy of topology optimization, and obtain reliable optimized results.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117995"},"PeriodicalIF":6.9,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-08 DOI: 10.1016/j.cma.2025.117961
Yifei Sun , Jingrun Chen
Solving partial differential equations (PDEs) is central to scientific and engineering applications. Challenging scenarios include problems with complicated solutions and/or complex domains. To address these issues, traditional numerical methods introduce tailored approximation spaces and specially designed meshes, both of which require significant human effort and computational cost. Recent developments in machine learning-based methods, especially the random feature method (RFM), remove the need for meshes and thus can be easily applied to problems over complex domains. However, as the solution and/or geometry complexity increases, a large number of collocation points and random feature functions are needed, which results in a large-scale linear problem that is difficult to solve. In this work, by combining the idea of domain decomposition with the RFM, we propose two-level RFMs to solve elliptic PDEs. First, complex domains are decomposed by dividing their bounding box along the coordinate planes, and the resulting smaller problems over subdomains are solved using local random features. Only the output-layer weights of the neural network, which serve as a compressed representation of the complicated solution, are communicated between adjacent subdomains, making this step highly parallelizable. Second, a one-time QR decomposition is applied to the local problems at the fine level and to one global problem at the coarse level, and is reused repeatedly in the iterative process. A moderate number of iterations is needed to achieve global convergence. Therefore, our method reduces the computational cost significantly without sacrificing accuracy. Three-dimensional elliptic problems with complicated solutions and/or complex domains, including the Poisson equation, a multiscale elliptic equation, and elasticity problems, are used to demonstrate the efficiency and robustness of our method. For the same accuracy requirement, our method solves these problems within a timescale of 100 s, while traditional methods typically take longer for the whole process or cannot even obtain a solution owing to the difficulty of generating a mesh.
{"title":"Two-level random feature methods for elliptic partial differential equations over complex domains","authors":"Yifei Sun , Jingrun Chen","doi":"10.1016/j.cma.2025.117961","DOIUrl":"10.1016/j.cma.2025.117961","url":null,"abstract":"<div><div>Solving partial differential equations (PDEs) is widely used in scientific and engineering applications. Challenging scenarios include problems with complicated solutions and/or over complex domains. To solve these issues, tailored approximation space and specially designed meshes are introduced in traditional numerical methods, both of which require significant human efforts and computational costs. Recent developments in machine learning-based methods, especially the random feature method (RFM), remove the usage of meshes and thus can be easily applied to problems over complex domains. However, as the solution and/or geometry complexity increases, a significant number of collocation points and random feature functions are needed, which results in a large-scale linear problem that is difficult to solve. In this work, by combining the idea of domain decomposition and RFM, we propose two-level RFMs to solve elliptic PDEs. First, complex domains are decomposed by dividing their bounding box along the coordinate planes, and the resulting smaller problems over subdomains are solved using local random features. Only the output-layer weights of the neural network, which serve as a compressed representation of the complicated solution, are communicated between adjacent subdomains, making this step highly parallelizable. Second, a one-time QR decomposition is applied for local problems at the fine level and one global problem at the coarse level and is reused repeatedly in the iterative process. Moderate numbers of iterations are needed to achieve a global convergence. Therefore, our method reduces the computational cost significantly without sacrificing accuracy. Three-dimensional elliptic problems with complicated solutions and/or over complex domains, including the Poisson equation, multiscale elliptic equation, and elasticity problems, are used to demonstrate the efficiency and robustness of our method. For the same accuracy requirement, our method solves these problems within a timescale of 100 s, while traditional methods typically take longer for the whole process or cannot even get a solution due to the difficulty of generating a mesh.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117961"},"PeriodicalIF":6.9,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-08 DOI: 10.1016/j.cma.2025.117980
Bingbing Chen , Dongfeng Li , Liyuan Wang , Xiangyun Ge , Chenfeng Li
Data-driven digital reconstruction is a powerful tool for building digital microstructures of heterogeneous materials such as porous media and composites. It uses scanned images as references and generates digital microstructures through optimisation procedures or computer vision methods. However, data-driven digital reconstruction methods do not apply to polycrystalline microstructures, because their raw measurement data (lattice orientation, grain structure, and phase distribution) do not naturally correspond to RGB images. The task faces challenges such as discontinuities and ambiguities in orientation colouring, as well as a lack of algorithms for extracting orientation data from RGB images. This paper introduces a novel data-driven digital reconstruction method for polycrystalline microstructures. The method includes experimental acquisition of microstructural data (such as the phase map, lattice symmetry, and lattice orientation), conversion of the experimental data to RGB image formats for continuous and symmetry-conserved visualisation, image generation from the continuous and symmetry-conserved orientation colouring, and reconstruction of grain data from the synthesised RGB images. The results demonstrate that this method enables efficient microstructure reconstructions with high fidelity to actual microstructural characteristics, addressing the limitations of traditional methods. Furthermore, by offering realistic digital microstructure models, this novel data-driven reconstruction scheme can be readily integrated with simulation tools to improve the study of structure–property linkages in polycrystalline materials.
{"title":"A novel data-driven digital reconstruction method for polycrystalline microstructures","authors":"Bingbing Chen , Dongfeng Li , Liyuan Wang , Xiangyun Ge , Chenfeng Li","doi":"10.1016/j.cma.2025.117980","DOIUrl":"10.1016/j.cma.2025.117980","url":null,"abstract":"<div><div>Data-driven digital reconstruction is a power tool for building digital microstructures for such heterogeneous materials as porous media and composites. It uses scanned images as reference and generates digital microstructures through optimisation procedures or computer vision methods. However, data-driven digital reconstruction methods do not apply to polycrystalline microstructures because their raw measurement data (lattice orientation, grain structure, and phase distribution) do not naturally correspond to RGB images. It faces challenges such as discontinuities and ambiguities in orientation colouring, as well as a lack of algorithms for extracting orientation data from RGB images. This paper introduces a novel data-driven digital reconstruction method for polycrystalline microstructures. The method includes experimental acquisition of microstructural data (such as phase map, lattice symmetry, and lattice orientation), conversion of experimental data to RGB image formats for continuous and symmetry-conserved visualisation, image generation from continuous and symmetry-conserved orientation colouring, and reconstruction of grain data from synthesised RGB images. The results demonstrate that this method enables efficient microstructure reconstructions with high fidelity to actual microstructural characteristics, addressing the limitations of traditional methods. Furthermore, by offering realistic digital microstructure models, this novel data-driven reconstruction scheme can be readily integrated with simulation tools to improve the study of structure–property linkages in polycrystalline materials.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117980"},"PeriodicalIF":6.9,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-04 DOI: 10.1016/j.cma.2025.117975
Zhenzeng Lei , Zeyu Deng , Yuan Liang , Guohai Chen , Rui Li , Dixiong Yang
Reliability-based topology optimization (RBTO) considering strength failure under load uncertainty can yield optimum topological designs that significantly enhance structural safety. By employing stress-based performance functions, this study formulates the problem as minimizing an objective function subject to a system reliability constraint. Load uncertainty is characterized by treating both the direction and the magnitude of loads as independent random variables. To accelerate the convergence of density-based topology optimization toward binary designs, we propose a hybrid method of solid isotropic material with penalization (SIMP) and sequential approximate integer programming (SAIP). Initially, the SIMP method is employed to determine an initial structural design with a stable force transmission path. Subsequently, the SAIP, incorporating an intermediate density variation strategy, is utilized to obtain a clear topology design. The direct probability integral method is used to accurately and efficiently estimate the failure probability of the series system and to calculate its sensitivity with respect to the design variables. Several numerical examples demonstrate that the proposed method achieves distinct binary topology configurations in fewer iterations. Moreover, the optimized designs exhibit high sensitivity to variations in load conditions. By accounting for load uncertainties in both magnitude and direction, the designs generated by RBTO are more suitable for practical applications.
{"title":"Hybrid method of SIMP and SAIP for reliability-based topology optimization considering strength failure under load uncertainty","authors":"Zhenzeng Lei, Zeyu Deng, Yuan Liang, Guohai Chen, Rui Li, Dixiong Yang","doi":"10.1016/j.cma.2025.117975","DOIUrl":"10.1016/j.cma.2025.117975","url":null,"abstract":"<div><div>Reliability-based topology optimization (RBTO) considering strength failure under load uncertainty can yield the optimum topological designs that significantly enhance the structural safety. By employing stress-based performance functions, this study formulates the problem as minimizing an objective function subject to a system reliability constraint. Load uncertainty is characterized by treating both the direction and magnitude of loads as independent random variables. To enhance the convergence speed of density-based topology optimization in achieving binary designs, we propose a hybrid method of solid isotropic material with penalization (SIMP) and sequential approximate integer programming (SAIP). Initially, the SIMP method is employed to determine an initial structural design with a stable path of force transmission. Subsequently, the SAIP incorporating with an intermediate density variation strategy is utilized to obtain a clear topology design. The direct probability integral method is suggested to accurately and efficiently estimate the failure probability of the series system and to calculate its sensitivity with respect to design variables. Several numerical examples demonstrate that the proposed method achieves distinct binary topology configurations in fewer iterations. Moreover, the optimized designs exhibit high sensitivity to variations in load conditions. By accounting for the load uncertainties in both magnitude and direction, the designs generated by RBTO are more suitable for practical applications.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117975"},"PeriodicalIF":6.9,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-03 DOI: 10.1016/j.cma.2025.117951
Xing-ao Li , Dequan Zhang , Xinyu Jia , Xu Han , Guosong Ning , Qing Li
Accurate dynamic modeling is essential for implementing model-based control strategies to enhance the performance of industrial robots. However, conventional dynamic parameter identification methods suffer from several limitations, such as insufficient accuracy, lack of physical feasibility assurance, and inadequate utilization of prior information. More importantly, existing methods fail to effectively quantify the uncertainties in dynamic parameters and their effects on performance. To address these challenges, this study proposes a novel Bayesian learning framework for dynamic parameter identification and torque prediction in industrial robots. This framework integrates the inverse dynamic model (IDM) into Bayesian inference, leveraging its linearity in the dynamic parameters to derive an analytical representation of those parameters that inherently accounts for uncertainty information. On this basis, the key factors influencing the mean and standard deviation of the dynamic parameters are analyzed. By propagating the uncertainty information obtained, the method generates reliable uncertainty bounds for the robotic joint torques. Moreover, incorporating reasonable prior information enhances identification accuracy while ensuring the physical feasibility of the dynamic parameters. To evaluate the effectiveness of the proposed approach, three industrial robot analysis examples are presented. The first two demonstrate the feasibility and performance of the proposed method, while the third, an in-house experimental study on the HSR-JR612 robot, further validates its accuracy in parameter identification and in predicting the uncertainty bounds of the joint torques. These results underscore the engineering applicability of the proposed framework in industrial robotic systems.
{"title":"A Bayesian learning approach for dynamic parameter identification and its applications in industrial robotic systems","authors":"Xing-ao Li , Dequan Zhang , Xinyu Jia , Xu Han , Guosong Ning , Qing Li","doi":"10.1016/j.cma.2025.117951","DOIUrl":"10.1016/j.cma.2025.117951","url":null,"abstract":"<div><div>Accurate dynamic modeling is essential for implementing model-based control strategies to enhance the performance of industrial robots. However, conventional dynamic parameter identification methods suffer from several limitations, such as insufficient accuracy, lack of physical feasibility assurance, and inadequate utilization of prior information. More importantly, the existing methods fail to quantify the uncertainties in dynamic parameters and their effects on the performance effectively. To address these challenges, this study proposes a novel Bayesian learning framework for dynamic parameter identification and torque prediction in industrial robots. This framework integrates the inverse dynamic model (IDM) into Bayesian inference, leveraging its linear characteristics to derive an analytical representation of dynamic parameters that inherently accounts for uncertainty information. On this basis, the key factors influencing the mean and standard deviation of the dynamic parameters are analyzed. By extrapolating the uncertainty information obtained, the method generates reliable uncertainty bounds for robotic joint torques. Moreover, incorporating reasonable prior information enhances identification accuracy while ensuring the physical feasibility of dynamic parameters. To evaluate the effectiveness of the proposed approach, three industrial robot analysis examples are presented. The first two are used to demonstrate the feasibility and performances of the proposed method, while the third, an in-house experimental study on HSR-JR612 robot, further validates its accuracy in parameter identification and the uncertainty-bound prediction for the joint torques. These results underscore the engineering applicability of the proposed framework in industrial robotic systems.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117951"},"PeriodicalIF":6.9,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-03 DOI: 10.1016/j.cma.2025.117959
Xiaowei Bai , Jie Yang , Qun Huang , Wei Huang , Huicui Li , Noureddine Damil , Heng Hu
This work aims to couple distance-minimizing data-driven computing with model-driven computing (standard constitutive model-based simulations), allowing for non-matching interfaces between computational regions meshed with both full and reduced finite elements. Specifically, data-driven (DD) computing is employed for regions where the material constitutive models are difficult to determine, whilst model-driven (MD) computing is applied to the remaining regions to take advantage of its computational efficiency. To connect non-matching interfaces, a penalty-based technique is utilized to ensure the continuity of displacements and the accurate transfer of interaction forces across these interfaces. In this manner, the proposed method becomes more versatile and practical for engineering applications, enabling separate modeling of data-driven and model-driven regions with different mesh refinements or types of elements (e.g., coarse and fine meshes, full and reduced finite elements) based on their specific needs. Several examples are provided to illustrate the effectiveness and robustness of the proposed method.
{"title":"Coupling of data-driven and model-driven computing within non-matching meshes","authors":"Xiaowei Bai , Jie Yang , Qun Huang , Wei Huang , Huicui Li , Noureddine Damil , Heng Hu","doi":"10.1016/j.cma.2025.117959","DOIUrl":"10.1016/j.cma.2025.117959","url":null,"abstract":"<div><div>This work aims to couple distance-minimizing data-driven computing with model-driven computing (standard constitutive model-based simulations), allowing for non-matching interfaces between computational regions meshed with both full and reduced finite elements. Specifically, data-driven (DD) computing is employed for regions where the material constitutive models are difficult to determine, whilst model-driven (MD) computing is applied to the remaining regions to take advantage of its computational efficiency. To connect non-matching interfaces, a penalty-based technique is utilized to ensure the continuity of displacements and the accurate transfer of interaction forces across these interfaces. In this manner, the proposed method becomes more versatile and practical for engineering applications, enabling separate modeling of data-driven and model-driven regions with different mesh refinements or types of elements (e.g., coarse and fine meshes, full and reduced finite elements) based on their specific needs. Several examples are provided to illustrate the effectiveness and robustness of the proposed method.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117959"},"PeriodicalIF":6.9,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-03 DOI: 10.1016/j.cma.2025.117952
Wei Zhang , Zhonglai Wang , Haoyu Wang , Zhangwei Li , Yunfei Wang , Ziyi Zhao
Reliability analysis methods based on surrogate models significantly reduce the number of true performance function calls. However, existing reliability analysis methods ignore Low-Fidelity (LF) information in the reliability assessment, which makes it difficult to efficiently and accurately estimate small failure probabilities with time-consuming High-Fidelity (HF) finite element simulations. To address this challenge, a novel reliability analysis method named AEK-MFIS is presented in this paper, which aims at reducing the number of HF simulation calls while providing accurate estimates of small failure probabilities. The proposed AEK-MFIS comprises the following strategies: (1) based on the Kalman Filter (KF) and Multi-Fidelity (MF) Kriging models, a novel ensemble of Kriging (EK) models is introduced to fuse information from different fidelities; (2) to select the best points more accurately and efficiently, a novel active learning function named the Global Error-based Active Learning Function (GEALF) is presented; (3) a new stopping criterion is constructed based on the EK prediction, which aims at avoiding premature or overly late termination when evaluating small failure probabilities. Six examples, comprising two numerical and four engineering examples, are introduced to elaborate and validate the effectiveness of the proposed method for estimating small failure probabilities.
{"title":"AEK-MFIS: An adaptive ensemble of Kriging models based on multi-fidelity simulations and importance sampling for small failure probabilities","authors":"Wei Zhang , Zhonglai Wang , Haoyu Wang , Zhangwei Li , Yunfei Wang , Ziyi Zhao","doi":"10.1016/j.cma.2025.117952","DOIUrl":"10.1016/j.cma.2025.117952","url":null,"abstract":"<div><div>The reliability analysis methods based on the surrogate model significantly reduce the number of true performance function calls. However, existing reliability analysis methods ignore Low-Fidelity (LF) information in the assessment of reliability analysis, which consequently leads to the difficulty in efficiently and accurately estimating the small failure probabilities with time-consuming High-Fidelity (HF) finite element simulation. To address this challenge, a novel reliability analysis named AEK-MFIS is presented in this paper, which aims at reducing the times of HF simulation calls while providing the accurate estimation result for small failure probabilities. The proposed AEK-MFIS comprises the following strategies: (1) based on the Kalman Filter (KF) and Multi-Fidelity (MF) Kriging model, a novel ensemble of Kriging (EK) models is introduced to fuse information from different fidelities; (2) to select the best points in a more accurate and efficient way, a novel active learning function named Global Error-based Active Learning Function (GEALF) is presented; (3) a new stopping criterion is constructed based on the EK prediction, which aims at avoiding the pre-mature or late-mature for evaluating the small failure probabilities. Six examples involving two numerical and four engineering examples are introduced to elaborate and validate the effectiveness of the proposed method for estimating the small failure probabilities.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117952"},"PeriodicalIF":6.9,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-02 DOI: 10.1016/j.cma.2025.117921
Farnoosh Hadizadeh , Wrik Mallik , Rajeev K. Jaiman
This study presents a graph neural network (GNN)-based surrogate modeling approach for multi-objective fluid-acoustic shape optimization. The proposed GNN model transforms mesh-based simulations into a computational graph, enabling steady-state prediction of pressure and velocity fields around varying geometries subjected to different operating conditions. We employ signed distance functions to implicitly encode geometries on unstructured nodes represented by the graph neural network. By integrating these functions with computational mesh information into the GNN architecture, our approach effectively captures geometric variations and learns their influence on flow behavior. The trained graph neural network achieves high prediction accuracy for aerodynamic quantities, with median relative errors of 0.5%–1% for pressure and velocity fields across 200 test cases. The predicted flow field is utilized to extract fluid force coefficients and boundary layer velocity profiles, which are then integrated into an acoustic prediction model to estimate far-field noise. This enables the direct integration of the coupled fluid-acoustic analysis in the multi-objective shape optimization algorithm, where the airfoil geometry is optimized to simultaneously minimize trailing-edge noise and maximize aerodynamic performance. Results show that the optimized airfoil achieves a 13.9% reduction in overall sound pressure level (15.82 dBA) while increasing lift by 7.2% under fixed operating conditions. Optimization was also performed under a different set of operating conditions to assess the model’s robustness and demonstrate its effectiveness across varying flow conditions. In addition to its adaptability, our GNN-based surrogate model, integrated with the shape optimization algorithm, exhibits a computational speed-up of three orders of magnitude compared to full-order online optimization applications while maintaining high accuracy. This work demonstrates the potential of GNNs as an efficient data-driven approach for fluid-acoustic shape optimization via adaptive morphing of structures.
{"title":"A graph neural network surrogate model for multi-objective fluid-acoustic shape optimization","authors":"Farnoosh Hadizadeh , Wrik Mallik , Rajeev K. Jaiman","doi":"10.1016/j.cma.2025.117921","DOIUrl":"10.1016/j.cma.2025.117921","url":null,"abstract":"<div><div>This study presents a graph neural network (GNN)-based surrogate modeling approach for multi-objective fluid-acoustic shape optimization. The proposed GNN model transforms mesh-based simulations into a computational graph, enabling steady-state prediction of pressure and velocity fields around varying geometries subjected to different operating conditions. We employ signed distance functions to implicitly encode geometries on unstructured nodes represented by the graph neural network. By integrating these functions with computational mesh information into the GNN architecture, our approach effectively captures geometric variations and learns their influence on flow behavior. The trained graph neural network achieves high prediction accuracy for aerodynamic quantities, with median relative errors of 0.5%–1% for pressure and velocity fields across 200 test cases. The predicted flow field is utilized to extract fluid force coefficients and boundary layer velocity profiles, which are then integrated into an acoustic prediction model to estimate far-field noise. This enables the direct integration of the coupled fluid-acoustic analysis in the multi-objective shape optimization algorithm, where the airfoil geometry is optimized to simultaneously minimize trailing-edge noise and maximize aerodynamic performance. Results show that the optimized airfoil achieves a 13.9% reduction in overall sound pressure level (15.82 dBA) while increasing lift by 7.2% under fixed operating conditions. Optimization was also performed under a different set of operating conditions to assess the model’s robustness and demonstrate its effectiveness across varying flow conditions. In addition to its adaptability, our GNN-based surrogate model, integrated with the shape optimization algorithm, exhibits a computational speed-up of three orders of magnitude compared to full-order online optimization applications while maintaining high accuracy. This work demonstrates the potential of GNNs as an efficient data-driven approach for fluid-acoustic shape optimization via adaptive morphing of structures.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117921"},"PeriodicalIF":6.9,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143746834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-02 DOI: 10.1016/j.cma.2025.117909
N. Sukumar , Amit Acharya
Many partial differential equations (PDEs), such as the Navier–Stokes equations in fluid mechanics, inelastic deformation in solids, and transient parabolic and hyperbolic equations, do not have an exact, primal variational structure. Recently, a variational principle based on the dual (Lagrange multiplier) field was proposed. The essential idea in this approach is to treat the given PDEs as constraints, and to invoke an arbitrarily chosen auxiliary potential with strong convexity properties to be optimized. On requiring the vanishing of the gradient of the Lagrangian with respect to the primal variables, a mapping from the dual to the primal fields is obtained. This leads to a convex dual functional that is minimized subject to Dirichlet boundary conditions on the dual variables, with the guarantee that even PDEs that do not possess a variational structure in primal form can be solved via a variational principle. The vanishing of the first variation of the dual functional is, up to Dirichlet boundary conditions on the dual fields, the weak form of the primal PDE problem with the dual-to-primal change of variables incorporated. We derive the dual weak form for the linear, one-dimensional, transient convection–diffusion equation. A Galerkin discretization is used to obtain the discrete equations, with the trial and test functions in one dimension chosen as linear combinations of either shallow neural networks with Rectified Power Unit (RePU) activation functions or B-spline basis functions; the corresponding stiffness matrix is symmetric. For transient problems, a space–time Galerkin implementation is used with tensor-product B-splines as approximating functions. Numerical results are presented for the steady-state and transient convection–diffusion equations and for transient heat conduction. The proposed method delivers sound accuracy for ODEs and PDEs, and rates of convergence are established in the L² norm and H¹ seminorm for the steady-state convection–diffusion problem.
{"title":"Variational formulation based on duality to solve partial differential equations: Use of B-splines and machine learning approximants","authors":"N. Sukumar , Amit Acharya","doi":"10.1016/j.cma.2025.117909","DOIUrl":"10.1016/j.cma.2025.117909","url":null,"abstract":"<div><div>Many partial differential equations (PDEs) such as Navier–Stokes equations in fluid mechanics, inelastic deformation in solids, and transient parabolic and hyperbolic equations do not have an exact, primal variational structure. Recently, a variational principle based on the dual (Lagrange multiplier) field was proposed. The essential idea in this approach is to treat the given PDEs as constraints, and to invoke an arbitrarily chosen auxiliary potential with strong convexity properties to be optimized. On requiring the vanishing of the gradient of the Lagrangian with respect to the primal variables, a mapping from the dual to the primal fields is obtained. This leads to requiring a convex dual functional to be minimized subject to Dirichlet boundary conditions on dual variables, with the guarantee that even PDEs that do not possess a variational structure in primal form can be solved via a variational principle. The vanishing of the first variation of the dual functional is, up to Dirichlet boundary conditions on dual fields, the weak form of the primal PDE problem with the dual-to-primal change of variables incorporated. We derive the dual weak form for the linear, one-dimensional, transient convection–diffusion equation. A Galerkin discretization is used to obtain the discrete equations, with the trial and test functions in one dimension chosen as linear combination of either shallow neural networks with Rectified Power Unit (RePU) activation functions or B-spline basis functions; the corresponding stiffness matrix is symmetric. For transient problems, a space–time Galerkin implementation is used with tensor-product B-splines as approximating functions. Numerical results are presented for the steady-state and transient convection–diffusion equations and transient heat conduction. The proposed method delivers sound accuracy for ODEs and PDEs and rates of convergence are established in the <span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> norm and <span><math><msup><mrow><mi>H</mi></mrow><mrow><mn>1</mn></mrow></msup></math></span> seminorm for the steady-state convection–diffusion problem.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117909"},"PeriodicalIF":6.9,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143746845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-01 DOI: 10.1016/j.cma.2025.117938
C.G. Krishnanunni , Tan Bui-Thanh
This work presents a two-stage adaptive framework for progressively developing deep neural network (DNN) architectures that generalize well for a given training data set. In the first stage, a layerwise training approach is adopted in which a new layer is added each time and trained independently by freezing the parameters in the previous layers. We impose desirable structures on the DNN by employing manifold regularization, sparsity regularization, and physics-informed terms. We introduce an ε-δ stability-promoting concept as a desirable property of a learning algorithm and show that employing manifold regularization yields an ε-δ stability-promoting algorithm. Further, we derive the necessary conditions for the trainability of a newly added layer and investigate the training saturation problem. In the second stage of the algorithm (post-processing), a sequence of shallow networks is employed to extract information from the residual produced in the first stage, thereby improving the prediction accuracy. Numerical investigations on prototype regression and classification problems demonstrate that the proposed approach can outperform fully connected DNNs of the same size. Moreover, by equipping the physics-informed neural network (PINN) with the proposed adaptive architecture strategy to solve partial differential equations, we numerically show that adaptive PINNs not only are superior to standard PINNs but also produce interpretable hidden layers with provable stability. We also apply our architecture design strategy to solve inverse problems governed by elliptic partial differential equations.
{"title":"An adaptive and stability-promoting layerwise training approach for sparse deep neural network architecture","authors":"C.G. Krishnanunni , Tan Bui-Thanh","doi":"10.1016/j.cma.2025.117938","DOIUrl":"10.1016/j.cma.2025.117938","url":null,"abstract":"<div><div>This work presents a two-stage adaptive framework for progressively developing deep neural network (DNN) architectures that generalize well for a given training data set. In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing parameters in the previous layers. We impose desirable structures on the DNN by employing manifold regularization, sparsity regularization, and physics-informed terms. We introduce a <span><math><mrow><mi>ɛ</mi><mo>−</mo><mi>δ</mi><mo>−</mo></mrow></math></span> stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields a <span><math><mrow><mi>ɛ</mi><mo>−</mo><mi>δ</mi></mrow></math></span> stability-promoting algorithm. Further, we also derive the necessary conditions for the trainability of a newly added layer and investigate the training saturation problem. In the second stage of the algorithm (post-processing), a sequence of shallow networks is employed to extract information from the residual produced in the first stage, thereby improving the prediction accuracy. Numerical investigations on prototype regression and classification problems demonstrate that the proposed approach can outperform fully connected DNNs of the same size. Moreover, by equipping the physics-informed neural network (PINN) with the proposed adaptive architecture strategy to solve partial differential equations, we numerically show that adaptive PINNs not only are superior to standard PINNs but also produce interpretable hidden layers with provable stability. We also apply our architecture design strategy to solve inverse problems governed by elliptic partial differential equations.</div></div>","PeriodicalId":55222,"journal":{"name":"Computer Methods in Applied Mechanics and Engineering","volume":"441 ","pages":"Article 117938"},"PeriodicalIF":6.9,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}