Pub Date: 2024-03-29 | DOI: 10.1007/s00366-024-01959-3
Fatih Uzun, Alexander M. Korsunsky
This paper introduces the OxCM contour method solver, a console application built on the legacy version of the FEniCS open-source computing platform for solving partial differential equations (PDEs) using the finite element method (FEM). The solver provides a standardized approach to solving linear elastic numerical models, calculating residual stresses corresponding to measured displacements resulting from changes in the boundary conditions after minimally disturbing (non-contact) cutting. This is achieved through a single-line command, provided that a tetrahedral mesh of the domain and experimentally collected and processed profilometry data are available. The solver is structured according to a static boundary condition rule, allowing it to rely solely on the cross-section occupied by the experimental data, independent of the geometric irregularities of the investigated body. This approach eliminates the need to create realistic finite element domains for complex-shaped bodies subjected to discontinuous processing. While the contour method provides highly accurate quantification of residual stresses in parts with continuously processed properties, real scenarios often involve parts subjected to discontinuous processing and geometric irregularities. The solver's validation is performed through numerical experiments, based on eigenstrain theory, representing both continuous and discontinuous processing conditions in artificially created domains with regular and irregular geometric features. These numerical experiments, free from experimental errors, contribute a novel understanding of the contour method's capabilities in reconstructing residual stresses in such bodies through a detailed error analysis. Furthermore, the application of the OxCM contour method solver in a real-case scenario involving a nickel-based superalloy finite-length weldment is demonstrated.
The results exhibit the expected distribution of the longitudinal component of residual stresses along the long-transverse direction, consistent with the solution of a commercial solver that was validated by neutron diffraction strain scanning.
Title: The OxCM contour method solver for residual stress evaluation (Engineering with Computers)
Pub Date: 2024-03-27 | DOI: 10.1007/s00366-024-01965-5
Giuliano Guarino, Pablo Antolin, Alberto Milazzo, Annalisa Buffa
This work focuses on the coupling of trimmed shell patches using Isogeometric Analysis, based on higher continuity splines that seamlessly meet the (C^1) requirement of Kirchhoff–Love-based discretizations. Weak enforcement of coupling conditions is achieved through the symmetric interior penalty method, where the fluxes are computed using their correct variationally consistent expression, which was only recently proposed and is adopted here for the first time in the context of coupling conditions. The constitutive relationship accounts for generically laminated materials, although the proposed tests are conducted under the assumption of uniform thickness and lamination sequence. Numerical experiments assess the method for an isotropic and a laminated plate, as well as an isotropic hyperbolic paraboloid shell from the new shell obstacle course. The boundary conditions and domain force are chosen to reproduce manufactured analytical solutions, which are taken as reference to compute rigorous convergence curves in the (L^2), (H^1), and (H^2) norms that closely approach the optimal rates predicted by theory. Additionally, we conduct a final test on a complex structure comprising five intersecting laminated cylindrical shells, whose geometry is directly imported from a STEP file. The results exhibit excellent agreement with those obtained through commercial software, showcasing the method's potential for real-world industrial applications.
Title: An interior penalty coupling strategy for isogeometric non-conformal Kirchhoff–Love shell patches (Engineering with Computers)
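Schematically, for a scalar second-order model problem the symmetric interior penalty coupling of two patches \(\Omega_1, \Omega_2\) along an interface \(\Gamma\) adds interface terms of the following form. This is an illustration only, in our own notation: the Kirchhoff–Love case treated in the paper additionally involves rotation jumps and the variationally consistent fourth-order fluxes.

```latex
% Schematic SIP coupling terms for a scalar second-order model problem
% (illustration; not the paper's Kirchhoff--Love fluxes).
a_\Gamma(u,v) =
  - \int_\Gamma \{\!\{ \partial_n u \}\!\} \, [\![ v ]\!] \,\mathrm{d}s
  - \int_\Gamma \{\!\{ \partial_n v \}\!\} \, [\![ u ]\!] \,\mathrm{d}s
  + \frac{\beta}{h} \int_\Gamma [\![ u ]\!] \, [\![ v ]\!] \,\mathrm{d}s,
\qquad
[\![ u ]\!] = u_1 - u_2, \quad
\{\!\{ \partial_n u \}\!\} = \tfrac{1}{2}\bigl(\partial_n u_1 + \partial_n u_2\bigr).
```

The first term enforces consistency, the second restores symmetry, and the penalty term with parameter \(\beta/h\) controls the jump across the non-conformal interface.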
Pub Date: 2024-03-27 | DOI: 10.1007/s00366-024-01951-x
Zheng Guojun, Li Runjin, Shen Guozhe, Zhang Xiangkui
Loaded shell structures may deform, rotate, and crack, leading to fracture. The traditional finite element method describes material internal forces through differential equations, posing challenges in handling discontinuities and complicating the resolution of fracture problems. Peridynamics (PD), employing integral equations, presents advantages for fracture analysis. However, as a non-local theory, PD requires discretizing materials into nodes and establishing interactions through bonds, which reduces computational efficiency. This study introduces a GPU-based parallel PD algorithm for large deformation problems in shell structures within the compute unified device architecture (CUDA) framework. The algorithm incorporates element mapping and bond mapping for high parallelism, and optimizes data structures and GPU memory usage for efficient parallel computing. The parallel computing capabilities of the GPU expedite crack analysis simulations, greatly reducing the time required to address large deformation problems. Experimental tests confirm the algorithm's accuracy, efficiency, and value for engineering applications, demonstrating its potential to advance fracture analysis in shell structures.
Title: A parallel acceleration GPU algorithm for large deformation of thin shell structures based on peridynamics (Engineering with Computers)
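The bond-level parallelism that the CUDA element/bond mapping exploits can be sketched in plain Python. This is a hypothetical 1D bond-based PD force evaluation, not the paper's shell formulation: every bond is independent, so each loop iteration below stands in for one GPU thread.

```python
# Hypothetical 1D bond-based peridynamics force evaluation. Each bond is
# independent, which is the parallelism the paper's bond mapping assigns
# to CUDA threads; here a flat loop stands in for the GPU kernel.

def pd_bond_forces(x_ref, u, horizon, c):
    """Nodal forces from bond stretches; x_ref assumed sorted ascending."""
    n = len(x_ref)
    bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
             if 0.0 < x_ref[j] - x_ref[i] <= horizon]
    f = [0.0] * n
    for i, j in bonds:                      # one "thread" per bond
        xi = x_ref[j] - x_ref[i]            # reference bond length
        y = xi + (u[j] - u[i])              # deformed bond vector
        stretch = (abs(y) - xi) / xi        # bond stretch
        direction = 1.0 if y > 0 else -1.0
        fb = c * stretch * direction        # pairwise bond force
        f[i] += fb                          # equal and opposite on the pair
        f[j] -= fb
    return f
```

Under a uniform strain `u_i = a * x_i` every bond sees stretch `a`, interior nodes equilibrate, and the net force over the body sums to zero, which is a convenient sanity check for any parallel implementation.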
Pub Date: 2024-03-23 | DOI: 10.1007/s00366-024-01960-w
Kuan-Chung Lin, Ting-Wei Chen, Huai-Liang Hsieh
This study introduces an innovative dynamic infinite meshfree method for robust and efficient solutions to half-space problems. The approach couples this method with the nodal integral reproducing kernel particle method to discretize half-spaces defined by an artificial boundary. The infinite meshfree shape function is uniquely constructed using the 1D reproducing kernel shape function combined with the boundary singular kernel method, ensuring the Kronecker delta property on artificial boundaries. Coupled with the wave-transfer function, the proposed approach models dissipation actions effectively. The infinite domain simulation employs the dummy node method, enhanced by Newton–Cotes integrals. To ensure solution stability and convergence, the approach is based on the Galerkin weak form of the domain integral method. To combat the challenges of instability and imprecision, we integrate the stabilized conforming nodal integration method and naturally stable nodal integration. The proposed method's efficacy is validated through various benchmark problems, with preliminary results showcasing superior precision and stability.
Title: A stable and efficient infinite meshfree approach for solving half-space heat conduction problems (Engineering with Computers)
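The 1D reproducing kernel construction on which such shape functions are built can be sketched as follows. This is a generic linear-consistency RK sketch (our illustration, not the authors' code): the kernel is corrected through a moment matrix so that the resulting shape functions reproduce constants and linear fields exactly.

```python
# Generic 1D reproducing kernel (RK) shape-function sketch with linear
# consistency (an RKPM ingredient; illustration only, not the paper's code).

def rk_shape_functions(x, nodes, a):
    """Shape functions at x: corrected kernel psi_i = b . p(x_i - x) w(x_i - x)."""
    w = lambda r: max(0.0, 1.0 - abs(r) / a)     # hat kernel with support a
    # moment matrix M = sum_i p p^T w, with shifted basis p = [1, x_i - x]
    m00 = m01 = m11 = 0.0
    for xi in nodes:
        wi = w(xi - x)
        d = xi - x
        m00 += wi
        m01 += wi * d
        m11 += wi * d * d
    det = m00 * m11 - m01 * m01
    # solve M b = p(0) = [1, 0]^T by 2x2 Cramer's rule
    b0 = m11 / det
    b1 = -m01 / det
    return [(b0 + b1 * (xi - x)) * w(xi - x) for xi in nodes]
```

By construction the corrected shape functions satisfy partition of unity, `sum(psi_i) = 1`, and linear reproduction, `sum(psi_i * x_i) = x`, for any evaluation point whose support contains enough nodes to keep the moment matrix invertible.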
Pub Date: 2024-03-21 | DOI: 10.1007/s00366-024-01957-5
Fabio V. Difonzo, Luciano Lopez, Sabrina F. Pellegrino
Deep learning is a powerful tool for solving data-driven differential problems and has found successful applications in solving direct and inverse problems described by PDEs, even in the presence of integral terms. In this paper, we propose to apply radial basis functions (RBFs) as activation functions in suitably designed Physics Informed Neural Networks (PINNs) to solve the inverse problem of computing the peridynamic kernel in the nonlocal formulation of the classical wave equation, resulting in what we call RBF-iPINN. We show that the selection of an RBF is necessary to achieve meaningful solutions that agree with the physical expectations carried by the data. We support our results with numerical examples and experiments, comparing the solution obtained with the proposed RBF-iPINN to the exact solutions.
Title: Physics informed neural networks for an inverse problem in peridynamic models (Engineering with Computers)
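The core architectural change is small: the usual smooth activation (e.g. tanh) is swapped for an RBF. A toy single-hidden-layer forward pass with a Gaussian RBF activation, using hypothetical weights and not the paper's RBF-iPINN architecture, looks like this:

```python
import math

# Toy single-hidden-layer network with a Gaussian RBF activation,
# illustrating the substitution made inside a PINN (hypothetical
# weights; not the RBF-iPINN implementation).

def rbf(z):
    """Gaussian radial basis function used as activation."""
    return math.exp(-z * z)

def forward(x, w1, b1, w2, b2):
    """u(x) = w2 . rbf(w1 * x + b1) + b2 for one scalar input x."""
    hidden = [rbf(w * x + b) for w, b in zip(w1, b1)]
    return sum(wo * h for wo, h in zip(w2, hidden)) + b2
```

The Gaussian RBF is smooth, bounded in (0, 1], and infinitely differentiable, which makes the PDE residual derivatives required by a PINN well behaved.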
Pub Date: 2024-03-17 | DOI: 10.1007/s00366-023-01938-0
Karthikeyan Panneerselvam, Suvranu De
A moment–curvature constitutive model is proposed for the dynamic simulation of visco-plastic rods subject to time-varying loads and constraints at interactive rates. Smooth spline functions are used to discretize the geometry of the rod and its kinematics, with the centerline coordinates and scalar twist as degrees of freedom (DOF). The plastic curvature is defined as a uniformly varying field, in contrast to localized lumped plasticity models, making it suitable for simulating spatial rods that undergo uniform plastic deformation, such as a cable or surgical suture thread. The yield criterion and plastic/visco-plastic flow rule are developed for spatial rods, taking advantage of the smooth moment–curvature fields available in the spline-based formulation. With the Bishop frame field as a reference, the material curvatures are quantified using the twist degree of freedom, enabling the plastic fields to be tracked with scalar twist and thereby eliminating slopes as DOF. Taking advantage of the invariant sub-blocks and the sparsity of the dynamic system matrix arising from the numerical discretization, a hierarchical (H-matrix) solution approach is utilized for efficient computation. Uniform curvature bending tests and moment relaxation tests are performed to study the convergence behavior of the model.
Several real-world tests involving contact are performed to demonstrate the applicability of the model in interactive simulations.
Title: A moment–curvature-based constitutive model for interactive simulation of visco-plastic rods (Engineering with Computers)
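In scalar form, a moment–curvature elastic/plastic split admits a textbook return-mapping sketch. The version below assumes perfect plasticity, which is our simplification: the paper's visco-plastic flow rule adds rate dependence on top of this structure.

```python
# Scalar moment-curvature return mapping with perfect plasticity
# (a simplified sketch of the elastic/plastic split; the paper's
# visco-plastic flow rule adds rate dependence on top of this).

def moment_return_map(kappa, kappa_p, EI, M_y):
    """Return (moment, updated plastic curvature) for total curvature kappa."""
    M_trial = EI * (kappa - kappa_p)        # elastic predictor
    f = abs(M_trial) - M_y                  # yield function
    if f <= 0.0:
        return M_trial, kappa_p             # elastic step: no plastic flow
    sign = 1.0 if M_trial > 0 else -1.0
    kappa_p += sign * f / EI                # plastic corrector
    return sign * M_y, kappa_p              # moment returned to yield surface
```

An elastic step simply returns `EI * kappa`; once the trial moment exceeds the yield moment `M_y`, the excess is converted into plastic curvature and the moment is capped at the yield surface.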
We present a machine learning-based mesh refinement technique for steady and unsteady incompressible flows. The clustering technique proposed by Otmani et al. (Phys Fluids 35(2):027112, 2023) is used to mark the viscous and turbulent regions for the flow past a cylinder at (Re=40) (steady laminar flow), at (Re=100) (unsteady laminar flow), and at (Re=3900) (unsteady turbulent flow). Within this clustered region, we use high mesh resolution, while downgrading the resolution outside, to show that it is possible to obtain levels of accuracy similar to those obtained with a uniformly refined mesh. The mesh adaptation is effective, as the clustering successfully identifies the two flow regions: a viscous/turbulent dominated region (including the boundary layer and wake) that requires high resolution, and an inviscid/irrotational region that only requires low resolution. The new clustering sensor is compared with traditional feature-based sensors (Q-criterion and vorticity based) commonly used for mesh adaptation. Unlike traditional sensors that rely on problem-dependent thresholds, our approach eliminates the need for such thresholds and locates the regions that require adaptation. After the initial validation using flows past cylinders, the clustering technique is applied in an engineering context to study the flow around a horizontal axis wind turbine configuration that has been tested experimentally at the Norwegian University of Science and Technology. The data used within this framework are generated using a high-order discontinuous Galerkin solver, which allows the polynomial order to be refined locally (p-refinement) in each element of the clustered region. For the laminar test cases, we can reduce the computational cost by 32% (steady (Re=40) case) and 20% (unsteady (Re=100) case), while we obtain a reduction of 33% for the (Re=3900) turbulent case. In the context of the wind turbine, a reduction of 43% in computational cost is observed while maintaining accuracy.
Title: Machine learning mesh-adaptation for laminar and turbulent flows: applications to high-order discontinuous Galerkin solvers (Engineering with Computers)
Authors: Kenza Tlales, Kheir-Eddine Otmani, Gerasimos Ntoukas, Gonzalo Rubio, Esteban Ferrer | Pub Date: 2024-03-15 | DOI: 10.1007/s00366-024-01950-y
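The essence of such a threshold-free sensor is a two-cluster split of a flow feature (for example a viscous/turbulence indicator). A minimal 1D k-means sketch, our toy rather than the Otmani et al. implementation, shows why no user-chosen threshold is needed:

```python
# Minimal two-cluster 1D k-means used as a threshold-free refinement
# sensor (toy sketch; the paper clusters physics-based flow features).

def two_means(values, iters=50):
    """Split values into low/high clusters; returns indices to refine."""
    c_lo, c_hi = min(values), max(values)     # deterministic initialization
    hi = set()
    for _ in range(iters):
        hi = {i for i, v in enumerate(values)
              if abs(v - c_hi) < abs(v - c_lo)}
        lo = set(range(len(values))) - hi
        if not hi or not lo:
            break
        new_lo = sum(values[i] for i in lo) / len(lo)
        new_hi = sum(values[i] for i in hi) / len(hi)
        if (new_lo, new_hi) == (c_lo, c_hi):  # centroids converged
            break
        c_lo, c_hi = new_lo, new_hi
    return hi                                 # elements marked for refinement
```

Elements whose feature lands in the high cluster get refined (p-refinement in the paper's DG setting); the boundary between clusters emerges from the data instead of a problem-dependent cutoff.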
The evolution of brittle fracture in a material can be conveniently investigated by means of the phase-field technique, introducing a smooth crack density functional. Following Borden et al. (2014), two distinct types of phase-field functional are considered: (i) a second-order model and (ii) a fourth-order one. The latter approach involves the bi-Laplacian of the phase field, and therefore the resulting Galerkin form requires continuously differentiable basis functions: a condition we easily fulfill via Isogeometric Analysis. In this work, we provide an extensive comparison of the considered formulations, performing several tests that progressively increase the complexity of the crack patterns. To measure the fracture length necessary in our accuracy evaluations, we propose an image-based algorithm that features an automatic skeletonization technique able to track complex fracture patterns. In all numerical results, damage irreversibility is handled in a straightforward and rigorous manner using the Projected Successive Over-Relaxation algorithm, which suits both phase-field formulations since it can be combined with higher-continuity isogeometric discretizations. Based on our results, the fourth-order approach provides higher rates of convergence and greater accuracy. Moreover, we observe that the fourth- and second-order models exhibit comparable accuracy when the former employs a mesh-size approximately two times larger, entailing a substantial reduction of the computational effort.
Title: Higher order phase-field modeling of brittle fracture via isogeometric analysis (Engineering with Computers)
Authors: Luigi Greco, Alessia Patton, Matteo Negri, Alessandro Marengo, Umberto Perego, Alessandro Reali | Pub Date: 2024-03-14 | DOI: 10.1007/s00366-024-01949-5
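Irreversibility turns each phase-field update into a bound-constrained quadratic program, and Projected SOR handles the constraint with a single extra projection per unknown. A generic dense sketch (small system for illustration; the actual solver operates on the isogeometric discretization):

```python
# Projected Successive Over-Relaxation (PSOR) for a bound-constrained
# quadratic program: minimize 0.5 d^T A d - b^T d subject to d >= lower,
# which is how damage irreversibility (d >= d_previous) enters each step.
# Generic dense sketch, not the paper's isogeometric solver.

def psor(A, b, lower, omega=1.0, iters=200):
    d = lower[:]                            # start from the feasible bound
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            r = b[i] - sum(A[i][j] * d[j] for j in range(n))
            # plain SOR update, then projection onto the lower bound
            d[i] = max(lower[i], d[i] + omega * r / A[i][i])
    return d
```

For example, with `A = [[2, -1], [-1, 2]]`, `b = [1, -2]`, and lower bound zero, the unconstrained minimizer `[0, -1]` is infeasible; PSOR converges to the constrained solution `[0.5, 0]`, with the second component held at the bound exactly as an irreversible damage variable would be.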
Pub Date: 2024-03-12 | DOI: 10.1007/s00366-023-01941-5
Da-Wei Jia, Zi-Yan Wu
An improved adaptive Kriging model-based metamodel importance sampling (IS) reliability analysis method is proposed to increase the efficiency of failure probability calculation. First, the silhouette plot method is introduced to determine the optimal number of clusters for k-means when establishing the IS density function, thus avoiding an arbitrary choice of the cluster number. Second, to account for the prediction uncertainty of the Kriging model, a novel learning function derived from the uncertainty of the failure probability is proposed for adaptive Kriging model construction. The proposed learning function is built on the variance information of the failure probability; its major benefit is that the distribution characteristics of the IS density function are considered, so the impact of the IS function on active learning is fully reflected. Finally, the coefficient of variation (COV) of the failure probability is adopted to define a novel stopping criterion for the learning function. The performance of the proposed method is verified through different numerical examples. The findings demonstrate that the refined learning strategy effectively identifies samples with substantial contributions to the failure probability and exhibits good convergence. Particularly notable is its capacity to significantly reduce the number of function calls while maintaining high accuracy in scenarios with fewer than 10 variable dimensions.
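The COV-based stopping criterion described above can be illustrated in isolation: keep drawing importance-sampling batches until the coefficient of variation of the failure-probability estimator drops below a target. The sketch below uses a one-dimensional toy limit state g(u) = beta − u with u ~ N(0, 1) (exact Pf = Φ(−beta)) and an IS density shifted to the design point; it is not the paper's adaptive Kriging scheme, only the stopping rule on an assumed toy problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy limit state: failure when g(u) = beta - u <= 0 for u ~ N(0, 1),
# so the exact failure probability is Phi(-beta) ~ 1.35e-3 for beta = 3.
beta = 3.0

def estimate_pf(cov_target=0.05, n_batch=1_000, max_batches=100):
    """Importance sampling with a COV-based stopping criterion."""
    terms = np.empty(0)
    for _ in range(max_batches):
        u = rng.normal(loc=beta, size=n_batch)       # IS density h = N(beta, 1)
        w = np.exp(0.5 * beta**2 - beta * u)         # likelihood ratio f(u)/h(u)
        terms = np.concatenate([terms, (beta - u <= 0) * w])
        pf = terms.mean()
        # COV of the estimator: sample std of the terms / (sqrt(N) * mean)
        cov = terms.std(ddof=1) / (np.sqrt(terms.size) * pf)
        if cov < cov_target:                         # stopping criterion
            break
    return pf, cov

pf, cov = estimate_pf()
```

With the density centered at the design point, a few thousand samples suffice to reach a 5% COV, whereas crude Monte Carlo at Pf ≈ 1.35e-3 would need hundreds of thousands; the paper's contribution is to couple this kind of criterion with an adaptively trained Kriging surrogate so that even these limit-state evaluations are cheap.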
{"title":"An improved adaptive Kriging model-based metamodel importance sampling reliability analysis method","authors":"Da-Wei Jia, Zi-Yan Wu","doi":"10.1007/s00366-023-01941-5","DOIUrl":"https://doi.org/10.1007/s00366-023-01941-5","url":null,"abstract":"<p>An improved adaptive Kriging model-based metamodel importance sampling (IS) reliability analysis method is proposed to increase the efficiency of failure probability calculation. First, the silhouette plot method is introduced to judge the optimal number of clusters for <i>k</i>-means to establish the IS density function, thus avoiding the problem of only assuming clusters arbitrarily. Second, considering the prediction uncertainty of the Kriging model, a novel learning function established from the uncertainty of failure probability is proposed for adaptive Kriging model establishment. The proposed learning function is established based on the variance information of failure probability. The major benefit of the proposed learning function is that the distribution characteristic of the IS density function is considered, thus fully reflecting the impact of the IS function on active learning. Finally, the coefficient of variation (COV) information of failure probability is adopted to define a novel stopping criterion for learning function. The performance of the proposed method is verified through different numerical examples. The findings demonstrate that the refined learning strategy effectively identifies samples with substantial contributions to failure probability, showcasing commendable convergence. 
Particularly notable is its capacity to significantly reduce function call volumes with heightened accuracy for scenarios featuring variable dimensions below 10.</p>","PeriodicalId":11696,"journal":{"name":"Engineering with Computers","volume":"18 1","pages":""},"PeriodicalIF":8.7,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140115131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-08  DOI: 10.1007/s00366-024-01953-9
Abstract
Both multi-material and stress-based topology optimization problems have been extensively investigated. However, few studies address the stress-based topology optimization of multi-material structures. Hence, this work proposes a novel topology optimization method for minimizing the maximum von Mises stress of structures with multiple materials under volume constraints. An extended Bi-directional Evolutionary Structural Optimization (BESO) method based on discrete variables, which can mitigate the well-known stress singularity problem, is adopted. The global von Mises stress is established with the p-norm function, and the adjoint sensitivity analysis is derived. Two benchmark numerical examples are investigated to validate the effectiveness of the proposed method. The effects of key parameters, including the p-norm exponent and the sensitivity and density filter radii, on the optimized results and the stress distributions are discussed. The influence of varying mesh densities on the optimized topologies is investigated in comparison with the multi-material stiffness maximization design. The topological results for multi-material stress design indicate that the maximum stress can be reduced compared with the multi-material stiffness design. It is concluded that the proposed approach achieves a reasonable design that effectively controls the stress level and reduces stress concentration in the critical stress regions of multi-material structures.
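The p-norm aggregation used above replaces the non-differentiable maximum stress with a smooth global measure. A minimal sketch, with illustrative element stresses that are not from the paper, shows how the p-norm bounds the true maximum from above and tightens toward it as p grows:

```python
import numpy as np

# Element-wise von Mises stresses of a toy design (MPa, illustrative values).
sigma = np.array([120.0, 95.0, 310.0, 220.0, 180.0])

def p_norm_stress(sigma, p):
    """Global stress measure sigma_PN = (sum_e sigma_e^p)^(1/p).

    Always >= max(sigma) and decreasing in p, so it approaches the true
    maximum stress from above as p -> infinity. Moderate p trades accuracy
    of the max approximation against smoothness, keeping the adjoint
    sensitivity analysis well behaved.
    """
    return np.sum(sigma ** p) ** (1.0 / p)

for p in (4, 8, 16, 64):
    print(p, p_norm_stress(sigma, p))
```

This over-estimation is one reason the choice of the p-norm exponent is studied as a key parameter: too small a p controls a loose surrogate of the maximum stress, while too large a p makes the sensitivities nearly as ill-conditioned as the maximum itself.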
{"title":"Stress-based bi-directional evolutionary topology optimization for structures with multiple materials","authors":"","doi":"10.1007/s00366-024-01953-9","DOIUrl":"https://doi.org/10.1007/s00366-024-01953-9","url":null,"abstract":"<h3>Abstract</h3> <p>Both multi-material and stress-based topology optimization problems have been extensively investigated. However, there are few studies on the stress-based topology optimization of multi-material structures. Hence, this work proposes a novel topology optimization method for minimizing the maximum von Mises stress of structures with multiple materials under volume constraints. An extended Bi-directional Evolutionary Structural Optimization (BESO) method based on discrete variables which can mitigate the well-known stress singularity problem is adopted. The global von Mises stress is established with the <em>p</em>-norm function, and the adjoint sensitivity analysis is derived. Two benchmark numerical examples are investigated to validate the effectiveness of the proposed method. The effects of key parameters including <em>p</em>-norm, sensitivity and density filter radii on the optimized results and the stress distributions are discussed. The influence of varying mesh densities on the optimized topologies are investigated in comparison with the multi-material stiffness maximization design. The topological results, for multi-material stress design, indicate that the maximum stress can be reduced compared with multi-material stiffness design. 
It concludes that the proposed approach can achieve a reasonable design that effectively controls the stress level and reduces the stress concentration effect at the critical stress areas of multi-material structures.</p>","PeriodicalId":11696,"journal":{"name":"Engineering with Computers","volume":"44 1","pages":""},"PeriodicalIF":8.7,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140071177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}