We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs). This contrasts with current “neural PDE solver” approaches, which employ collocation-based methods to make pointwise predictions. A mesh-based approach naturally enforces different boundary conditions and makes it straightforward to invoke well-developed PDE theory (including analysis of numerical stability and convergence) to obtain capacity bounds for the proposed neural networks on discretized domains. We explore our mesh-based strategy, called NeuFENet, using a weighted Galerkin loss function based on the Finite Element Method (FEM) on a parametric elliptic PDE. The weighted Galerkin loss (FEM loss) is similar to an energy functional; it produces improved solutions, satisfies a priori mesh convergence, and can model both Dirichlet and Neumann boundary conditions. We prove theoretically, and illustrate with experiments, convergence results analogous to the mesh convergence analysis used for finite element solutions of PDEs. These results suggest that a mesh-based neural network is a promising approach for solving parametric PDEs with theoretical bounds.
NeuFENet: neural finite element solutions with theoretical bounds for parametric PDEs. Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian. Engineering with Computers, 2024-04-10. DOI: 10.1007/s00366-024-01955-7
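The energy-minimization structure of such an FEM loss can be illustrated with a minimal sketch. This is an illustration of the loss principle only, not the authors' NeuFENet implementation: the 1D Poisson problem, P1 mesh, and plain gradient descent below are assumptions, and the nodal values are optimized directly instead of being predicted by a network.

```python
import numpy as np

# Minimal sketch of an energy-functional (Ritz/Galerkin) loss on a 1D mesh:
# minimizing E(u) = 1/2 u^T K u - f^T u over nodal values recovers the
# finite element solution of -u'' = f with u(0) = u(1) = 0.

n = 33                          # number of nodes
h = 1.0 / (n - 1)               # uniform element size

# Assemble the standard P1 stiffness matrix and consistent load vector.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
f = h * np.ones(n)              # source term f(x) = 1

# Restrict to interior nodes (homogeneous Dirichlet BCs).
Ki, fi = K[1:-1, 1:-1], f[1:-1]

# Gradient descent on the discrete energy functional, standing in for
# network training.
u = np.zeros(n - 2)
lr = h / 4.0                    # small, stable step size
for _ in range(20000):
    grad = Ki @ u - fi          # dE/du
    u -= lr * grad

u_direct = np.linalg.solve(Ki, fi)
print(np.max(np.abs(u - u_direct)))
```

The point of the sketch is that the minimizer of the energy functional coincides with the solution of the discrete Galerkin system, which is what lets standard FEM convergence theory carry over to a loss-driven solver.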
Pub Date: 2024-04-10 | DOI: 10.1007/s00366-024-01964-6
Silvia Hervas-Raluy, Diego Sainz-DeMena, Maria Jose Gomez-Benito, Jose Manuel García-Aznar
Childhood cancer is a devastating disease that requires continued research and improved treatment options to increase survival rates and quality of life for those affected. The response to cancer treatment can vary significantly among patients, highlighting the need for a deeper understanding of the underlying mechanisms of tumour growth and recovery to improve diagnostic and treatment strategies. Patient-specific models have emerged as a promising alternative for tackling the challenges of tumour mechanics through individualised simulation. In this study, we present a methodology for developing subject-specific tumour models that incorporate the initial distribution of cell density, the tumour vasculature, and the tumour geometry obtained from clinical MRI data. Tumour mechanics is simulated through the Finite Element method, coupling the dynamics of tumour growth and remodelling with the mechano-transport of oxygen and chemotherapy. These models enable a new application of tumour mechanics: predicting changes in tumour size and shape resulting from chemotherapeutic interventions for individual patients. Although the specific context of application in this work is neuroblastoma, the proposed methodologies can be extended to other solid tumours. Given the difficulty of treating paediatric solid tumours such as neuroblastoma, this work includes two patients with different prognoses who received chemotherapy treatment. The simulated results are compared with the actual tumour sizes and shapes of the patients. Overall, the simulations provided clinically useful information for evaluating the effectiveness of the chemotherapy treatment in each case. These results suggest that the biomechanical model could be a valuable tool for personalised medicine in solid tumours.
Image-based biomarkers for engineering neuroblastoma patient-specific computational models. Engineering with Computers. DOI: 10.1007/s00366-024-01964-6
In this paper, a new strong-form numerical method, the element differential method (EDM), is employed to solve two- and three-dimensional frictionless contact problems. Using EDM, one obtains the system of equations by directly differentiating the shape functions of the Lagrange isoparametric elements that characterize the physical variables and the geometry, without a variational principle or any integration. Non-uniform contact discretization is used to enforce the contact conditions, which avoids having to discretize the contact surfaces of the two contacting objects identically. Two methods for imposing contact constraints are proposed: one imposes Neumann boundary conditions on the contact surface, whereas the other directly applies the contact constraints as collocation equations for the nodes within the contact zone. The two methods have similar accuracy, but the multi-point constraints method does not increase the degrees of freedom of the system equations during the iteration process. The results of four numerical examples verify the accuracy of the proposed method.
Element differential method for contact problems with non-conforming contact discretization. Wei-Long Fan, Xiao-Wei Gao, Yong-Tong Zheng, Bing-Bing Xu, Hai-Feng Peng. Engineering with Computers, 2024-04-09. DOI: 10.1007/s00366-024-01963-7
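The strong-form idea of differentiating shape functions and collocating the governing equation, rather than integrating a weak form, can be sketched on a toy 1D problem. The Poisson-type test problem below is an assumption for illustration, not the paper's contact formulation.

```python
import numpy as np

# Sketch of the strong-form principle: differentiate Lagrange shape
# functions directly and collocate the PDE at the nodes, with no
# variational principle and no integration. Toy problem: u''(x) = 2 on
# (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = x^2 - x.

nodes = np.linspace(0.0, 1.0, 5)
n = len(nodes)

# Second-derivative matrix D2[i, j] = l_j''(nodes[i]), where l_j is the
# Lagrange shape function attached to node j.
D2 = np.zeros((n, n))
for j in range(n):
    others = np.delete(nodes, j)
    # Monomial coefficients of l_j(x) = prod_k (x - x_k) / (x_j - x_k).
    coeffs = np.poly(others) / np.prod(nodes[j] - others)
    d2 = np.polyder(coeffs, 2)
    D2[:, j] = np.polyval(d2, nodes)

# Collocation system: PDE rows at interior nodes, boundary rows enforce BCs.
A = D2.copy()
b = 2.0 * np.ones(n)
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0

u = np.linalg.solve(A, b)
exact = nodes**2 - nodes
print(np.max(np.abs(u - exact)))
```

Because the degree-4 Lagrange basis represents the quadratic exact solution exactly, the collocation solution here is accurate to machine precision; the same construction generalizes to isoparametric elements in 2D and 3D.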
Pub Date: 2024-04-08 | DOI: 10.1007/s00366-024-01970-8
Masoud Ezati, Mohsen Esmaeilbeigi, Ahmad Kamandi
Physics-informed machine learning (PIML) methods are effective, highly flexible tools for solving inverse problems and operator equations. Among them, the physics-informed learning model built upon Gaussian processes (PIGP) holds a special place because, in the context of Bayesian inference, it provides the posterior probability distribution of its predictions. In this method, the training phase that determines the optimal hyperparameters amounts to optimizing a non-convex function, the likelihood function. Because the gradient is available in explicit form, conjugate gradient (CG) optimization algorithms are recommended; moreover, since every evaluation of the likelihood function requires computing the determinant and inverse of the covariance matrix, the CG method should be designed to complete in the minimum number of evaluations. Previous studies considered only one particular form of the CG method, which is naturally inefficient. In this paper, we study the efficiency of CG methods for optimizing the likelihood function in PIGP. Numerical simulations show that the initial step length and the search direction in CG methods have a significant effect on the number of likelihood evaluations and, consequently, on the efficiency of PIGP. Based on the specific characteristics of the objective function in this problem, we propose two modifications to traditional CG methods: normalizing the initial step length to avoid getting stuck at ill-conditioned points, and improving the search direction with an angle condition that guarantees global convergence. Numerical simulations of seven improved CG methods, with four different angles in the angle condition and three different initial step lengths, show that the proposed modifications significantly reduce the number of iterations and function evaluations across the different CG variants. This markedly increases the efficiency of the PIGP method; in particular, the improved algorithms perform well in cases where the traditional CG algorithms fail. Finally, to make these developments applicable to other parametric equations, a compiled package implementing the methods used in this paper is attached.
Novel approaches for hyper-parameter tuning of physics-informed Gaussian processes: application to parametric PDEs. Engineering with Computers. DOI: 10.1007/s00366-024-01970-8
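The two CG modifications described above (a normalized initial step length and an angle condition on the search direction) can be sketched generically. The Rosenbrock test function below is a stand-in assumption, not the GP likelihood used in the paper, and the Armijo backtracking line search is one simple choice among many.

```python
import numpy as np

# Nonlinear CG with a normalized initial step length and an angle
# condition that restarts to steepest descent whenever the current
# direction is a poor descent direction.

def f(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
        200.0 * (x[1] - x[0]**2),
    ])

def cg_minimize(x0, iters=5000, eta=0.1):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        # Angle condition: require cos(angle(d, -g)) >= eta, else restart.
        if -(d @ g) < eta * np.linalg.norm(d) * np.linalg.norm(g):
            d = -g
        # Normalized initial step length, then Armijo backtracking.
        t = 1.0 / max(1.0, np.linalg.norm(d))
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-14:
            t *= 0.5
        if f(x + t * d) < f(x):          # accept only descent steps
            x = x + t * d
        g_new = grad(x)
        if g_new @ g_new < 1e-30:        # gradient vanished: converged
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return x

x_opt = cg_minimize([-1.2, 1.0])
print(x_opt, f(x_opt))
```

The restart both enforces a descent direction (so the line search always terminates) and mirrors the paper's use of the angle condition to guarantee global convergence.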
Pub Date: 2024-04-08 | DOI: 10.1007/s00366-024-01961-9
Pasquale Ambrosio, Salvatore Cuomo, Mariapia De Rosa
In recent years, Scientific Machine Learning (SciML) methods for solving Partial Differential Equations (PDEs) have gained increasing popularity. Within this paradigm, Physics-Informed Neural Networks (PINNs) are deep learning frameworks for solving initial-boundary value problems involving nonlinear PDEs, and they have recently shown promising results in several application fields. Motivated by applications to gas filtration problems, we present and evaluate a PINN-based approach for predicting solutions to strongly degenerate parabolic problems with asymptotic structure of Laplacian type. To the best of our knowledge, this is one of the first papers demonstrating the efficacy of the PINN framework for this class of problems. In particular, we estimate an appropriate approximation error for test problems whose analytical solutions are known. The numerical experiments, which include two- and three-dimensional spatial domains, emphasize the effectiveness of this approach in predicting accurate solutions.
A physics-informed deep learning approach for solving strongly degenerate parabolic problems. Engineering with Computers. DOI: 10.1007/s00366-024-01961-9
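The collocation principle behind PINNs can be stripped to a linear toy. In the sketch below a sine ansatz that already satisfies the boundary conditions replaces the neural network, so the residual least-squares problem has a closed-form solution; everything about the test problem is an assumption for illustration, and the degenerate parabolic problems treated in the paper are far harder than this linear example.

```python
import numpy as np

# Physics-informed fitting in miniature: an ansatz
# u(x) = sum_k c_k sin(k*pi*x), which satisfies u(0) = u(1) = 0 by
# construction, is fitted so the residual of -u'' = f vanishes at
# interior collocation points. A neural network trained by gradient
# descent plays this role in a real PINN.

K = 8                                   # number of basis functions
xs = np.linspace(0.02, 0.98, 50)        # interior collocation points
f = np.pi**2 * np.sin(np.pi * xs)       # manufactured source: exact u = sin(pi*x)

k = np.arange(1, K + 1)
# The residual of -u'' is linear in c: A[i, j] = (k_j*pi)^2 sin(k_j*pi*x_i).
A = (k * np.pi)**2 * np.sin(np.pi * np.outer(xs, k))
c, *_ = np.linalg.lstsq(A, f, rcond=None)

x_test = np.linspace(0.0, 1.0, 201)
u = np.sin(np.pi * np.outer(x_test, k)) @ c
print(np.max(np.abs(u - np.sin(np.pi * x_test))))
```

Because the manufactured solution lies in the span of the ansatz, driving the collocation residual to zero recovers it essentially exactly, which is the idealized version of what PINN training aims for.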
Pub Date: 2024-04-06 | DOI: 10.1007/s00366-024-01972-6
Israel Alejandro Hernández-González, Enrique García-Macías
Model-based damage identification for structural health monitoring (SHM) remains an open issue in the literature. Along with the computational challenges related to modelling full-scale structures, classical single-model structural identification (St-Id) approaches provide no means of guaranteeing the physical meaningfulness of the inverse calibration results. In this light, this work introduces a novel methodology for model-driven damage identification based on multi-class digital models formed by a population of competing structural models, each representing a different failure mechanism. The forward models are replaced by computationally efficient meta-models and are continuously calibrated using monitoring data. If an anomaly in the structural performance is detected, a model selection approach based on the Bayesian information criterion (BIC) is used to identify the most plausibly activated failure mechanism.
The potential of the proposed approach is illustrated through two case studies, including a numerical planar truss and a real-world historical construction: the Muhammad Tower in the Alhambra fortress.

Towards a comprehensive damage identification of structures through populations of competing models. Engineering with Computers. DOI: 10.1007/s00366-024-01972-6
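BIC-based selection among competing models can be sketched in a few lines. The stand-in "mechanisms" below are two regression models of different complexity fitted to synthetic data (an assumption for illustration, not the paper's structural meta-models); BIC should prefer the simpler model that actually generated the data.

```python
import numpy as np

# Bayesian information criterion: BIC = k*ln(n) - 2*ln(L_hat). For Gaussian
# residuals this reduces, up to a constant, to n*ln(RSS/n) + k*ln(n).
# The model with the lowest BIC is selected.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(scale=0.05, size=x.size)   # truth: the linear model

def bic(y, y_hat, k):
    n = y.size
    rss = np.sum((y - y_hat)**2)
    return n * np.log(rss / n) + k * np.log(n)

scores = {}
for degree in (1, 8):                    # two competing candidate models
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = bic(y, np.polyval(coeffs, x), degree + 1)

best = min(scores, key=scores.get)
print(scores, "selected degree:", best)
```

The degree-8 model always achieves a slightly lower residual, but the k*ln(n) complexity penalty outweighs that gain, which is exactly the trade-off that makes BIC useful for discriminating between competing failure-mechanism models.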
Pub Date: 2024-04-04 | DOI: 10.1007/s00366-024-01962-8
Myeong-Seok Go, Young-Bae Kim, Jeong-Hoon Park, Jae Hyuk Lim, Jin-Gyun Kim
This study presents an efficient fixed-time-increment approach for data-driven analysis of flexible multibody dynamics (FMBD) problems, combining deep neural network (DNN) modeling and principal component analysis (PCA). To construct a DNN-based surrogate model, we eliminated the time instant from the input features while applying PCA to reduce the dimensionality of the output results, which encompass transient dynamics such as displacement, stress, and strain. This restructuring preserves the temporal information in the output data set while still formatting it with a fixed time increment, streamlining the training of an efficient DNN model. Despite using fewer samples, this approach significantly reduces training costs compared to a DNN model without PCA. Benchmark problems, including a double compound pendulum, a piston-cylinder system, and a deployable parabolic antenna, demonstrate that the proposed scheme drastically reduces training time while maintaining accuracy and fast prediction times.
A rapidly trained DNN model for real-time flexible multibody dynamics simulations with a fixed-time increment. Engineering with Computers. DOI: 10.1007/s00366-024-01962-8
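The PCA output-reduction step can be sketched with the SVD: each training sample's transient output history becomes one row of a snapshot matrix, and the surrogate then only has to predict a handful of principal coefficients instead of the full time series. The synthetic damped-oscillation data below are an assumption standing in for the FMBD simulation outputs.

```python
import numpy as np

# Compress transient outputs with PCA so a surrogate predicts r numbers
# per sample instead of a full time history.

rng = np.random.default_rng(1)
n_samples, n_steps = 40, 500
t = np.linspace(0.0, 1.0, n_steps)

# Each sample: a damped oscillation whose amplitude and frequency depend on
# two hidden parameters, so the snapshot matrix has low intrinsic rank.
params = rng.uniform(0.5, 2.0, size=(n_samples, 2))
Y = np.stack([a * np.exp(-t) * np.sin(2 * np.pi * w * t) for a, w in params])

Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
r = 10                                 # retained principal components
coeffs = (Y - Y_mean) @ Vt[:r].T       # reduced training targets, shape (40, 10)

Y_rec = Y_mean + coeffs @ Vt[:r]       # reconstruction from r numbers per sample
rel_err = np.linalg.norm(Y_rec - Y) / np.linalg.norm(Y)
print(rel_err)
```

In the surrogate setting, a network would map the two parameters to the rows of `coeffs`, and predictions would be lifted back to full time histories through `Vt[:r]`.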
Pub Date: 2024-04-03 | DOI: 10.1007/s00366-024-01968-2
Guglielmo Federico Antonio Brunetti, Mario Maiolo, Carmine Fallico, Gerardo Severino
Untangling flow and mass transport in aquifers is essential for effective water management and protection. However, understanding the mechanisms underlying such phenomena is challenging, particularly in highly heterogeneous natural aquifers. Past research has been limited by the lack of dense data series and experimental models that provide precise knowledge of such aquifer characteristics. To bridge this gap and advance our current understanding, we present the findings of a pioneering experimental investigation that characterizes a unique, strongly heterogeneous, laboratory-constructed phreatic aquifer at an intermediate scale under radial flow conditions. This strong heterogeneity was achieved by randomly distributing 2527 cells across 7 layers, each filled with one of 12 different soil mixtures, with their textural characteristics, porosity, and saturated hydraulic conductivity measured in the laboratory. We placed 37 fully penetrating piezometers radially at varying distances from the central pumping well, allowing for an extensive pumping test campaign to obtain saturated hydraulic conductivity values for each piezometer location and scaling laws along eight directions. Results reveal that the aquifer’s strong heterogeneity led to significant vertical and directional anisotropy in saturated hydraulic conductivity. Furthermore, we experimentally demonstrated for the first time that the porous medium tends toward homogeneity when transitioning from the scale of heterogeneity to the scale of investigation. These novel findings, obtained on a uniquely highly heterogeneous aquifer, contribute to the field and provide valuable insights for researchers studying flow and mass transport phenomena. The comprehensive dataset obtained will serve as a foundation for future research and as a tool to validate findings from previous studies on strongly heterogeneous aquifers.
These novel findings, obtained on a uniquely highly heterogeneous aquifer, contribute to the field and provide valuable insights for researchers studying flow and mass transport phenomena. The comprehensive dataset obtained will serve as a foundation for future research and as a tool to validate findings from previous studies on strongly heterogeneous aquifers.

Unraveling the complexities of a highly heterogeneous aquifer under convergent radial flow conditions. Engineering with Computers. DOI: 10.1007/s00366-024-01968-2
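Extracting a directional scaling law for saturated hydraulic conductivity from piezometer data typically reduces to a power-law fit in log-log space. The sketch below uses synthetic values; the radial distances, the coefficient a, and the exponent b are illustrative assumptions, not measurements from the experiment.

```python
import numpy as np

# Fit a scaling law K(r) = a * r**b by linear regression in log-log space:
# ln(K) = ln(a) + b * ln(r).

rng = np.random.default_rng(2)
r = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0])            # radial distances (m)
K_true = 1e-4 * r**0.3                                   # assumed scaling law
K_obs = K_true * np.exp(rng.normal(scale=0.05, size=r.size))  # noisy "data"

b, ln_a = np.polyfit(np.log(r), np.log(K_obs), 1)
print("a =", np.exp(ln_a), "b =", b)
```

Repeating the fit along each of the eight measurement directions would yield a direction-dependent exponent, which is one way directional anisotropy in the scaling behaviour can be quantified.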
Pub Date: 2024-04-02 | DOI: 10.1007/s00366-024-01954-8
Abdalla Elbana, Amar Khennane, Paul J. Hazell
This paper presents a novel and effective strategy for modelling three-dimensional periodic representative volume elements (RVEs) of particulate composites. The proposed method generates an RVE that can represent the microstructure of particulate composites with hollow spherical inclusions for homogenization (e.g., deriving the full-field effective elastic properties). The RVE features periodic, randomised geometry suitable for the application of periodic boundary conditions in finite element analysis. A robust algorithm combining Monte Carlo theory with collision-driven molecular dynamics is introduced to pack spherical particles at random spatial positions within the RVE. This technique achieves a particle-matrix volume ratio of up to 50% while maintaining geometric periodicity across the domain and a random distribution of inclusions within the RVE. Another algorithm is established to apply periodic boundary conditions (PBCs) and precisely generate the full-field elastic properties of such microstructures. Furthermore, a user-friendly automatic ABAQUS CAE plug-in tool, ‘Gen_PRVE’, is developed to generate a three-dimensional RVE of any spherical particulate composite or porous material. Gen_PRVE gives users a great deal of flexibility to generate RVEs with varying side dimensions, sphere sizes, and periodic mesh resolutions. In addition, the tool can be used to conduct rapid mesh convergence and RVE size sensitivity studies, and to investigate the impact of the inclusion/matrix volume fraction on the solution.
{"title":"Multiscale modelling of particulate composites with spherical inclusions","authors":"Abdalla Elbana, Amar Khennane, Paul J. Hazell","doi":"10.1007/s00366-024-01954-8","DOIUrl":"https://doi.org/10.1007/s00366-024-01954-8","url":null,"abstract":"<p>This paper presents a novel and effective strategy for modelling three-dimensional periodic representative volume elements (RVE) of particulate composites. The proposed method aims to generate an RVE that can represent the microstructure of particulate composites with hollow spherical inclusions for homogenization (e.g., deriving the full-field effective elastic properties). The RVE features periodic and randomised geometry suitable for the application of periodic boundary conditions in finite element analysis. A robust algorithm is introduced following the combined theories of Monte Carlo and collision driven molecular dynamics to pack spherical particles in random spatial positions within the RVE. This novel technique can achieve a high particle-matrix volume ratio of up to 50% while still maintaining geometric periodicity across the domain and random distribution of inclusions within the RVE. Another algorithm is established to apply periodic boundary conditions (PBC) to precisely generate full field elastic properties of such microstructures. Furthermore, a user-friendly automatic ABAQUS CAE plug-in tool ‘<b>Gen_PRVE</b>’ is developed to generate three-dimensional RVE of any spherical particulate composite or porous material. <b>Gen_PRVE</b> provides users with a great deal of flexibility to generate Representative Volume Elements (RVEs) with varying side dimensions, sphere sizes, and periodic mesh resolutions. In addition, this tool can be effectively utilized to conduct a rapid mesh convergence study, an RVE size sensitivity study, and investigate the impact of inclusion/matrix volume fraction on the solution. 
Lastly, examples of these applications are presented.</p>","PeriodicalId":11696,"journal":{"name":"Engineering with Computers","volume":"63 1","pages":""},"PeriodicalIF":8.7,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
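Applying PBC, as the abstract describes, amounts to constraining each boundary node to its periodic image on the opposite face. A minimal sketch of that pairing step on a structured (n+1)^3 grid is shown below; the node-numbering scheme is an assumption for illustration, and a real implementation must additionally deduplicate edge/corner nodes, which this sketch pairs once per axis.

```python
import itertools

def periodic_node_pairs(n):
    """Pair each node on a 'minus' face of an (n+1)^3 structured grid with
    its periodic image on the opposite 'plus' face, as needed to impose
    constraints of the form u(x + L) - u(x) = E.L in RVE homogenization.
    Node numbering (lexicographic) is an illustrative assumption."""
    def nid(i, j, k):
        return (i * (n + 1) + j) * (n + 1) + k
    pairs = []
    for axis in range(3):
        for a, b in itertools.product(range(n + 1), repeat=2):
            idx_lo = [a, b]
            idx_lo.insert(axis, 0)   # node on the minus face of this axis
            idx_hi = [a, b]
            idx_hi.insert(axis, n)   # its periodic image on the plus face
            pairs.append((nid(*idx_lo), nid(*idx_hi)))
    return pairs

pairs = periodic_node_pairs(2)
```

In an FE code each pair would become a multi-point constraint (e.g., an ABAQUS *Equation) linking the two nodes' displacements through the prescribed macroscopic strain.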
Pub Date: 2024-03-31 | DOI: 10.1007/s00366-024-01958-4
H. M. Verhelst, A. Mantzaflaris, M. Möller, J. H. Den Besten
Mesh adaptivity is a technique to provide detail in numerical solutions without the need to refine the mesh over the whole domain. Mesh adaptivity in isogeometric analysis can be driven by Truncated Hierarchical B-splines (THB-splines) which add degrees of freedom locally based on finer B-spline bases. Labeling of elements for refinement is typically done using residual-based error estimators. In this paper, an adaptive meshing workflow for isogeometric Kirchhoff–Love shell analysis is developed. This framework includes THB-splines, mesh admissibility for combined refinement and coarsening and the Dual-Weighted Residual (DWR) method for computing element-wise error contributions. The DWR can be used in several structural analysis problems, allowing the user to specify a goal quantity of interest which is used to mark elements and refine the mesh. This goal functional can involve, for example, displacements, stresses, eigenfrequencies etc. The proposed framework is evaluated through a set of different benchmark problems, including modal analysis, buckling analysis and non-linear snap-through and bifurcation problems, showing high accuracy of the DWR estimator and efficient allocation of degrees of freedom for advanced shell computations.
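The refinement loop the abstract describes needs a marking step that turns element-wise (goal-oriented, e.g. DWR) error contributions into a set of elements to refine. As an illustrative sketch only (not the authors' implementation, and leaving aside how the DWR indicators themselves are computed), a standard Dörfler bulk-marking criterion could look like:

```python
def mark_elements(eta, theta=0.8):
    """Doerfler (bulk) marking: pick the smallest set of elements whose
    error indicators eta[i] account for a fraction theta of the total
    estimated error; these are the elements labeled for refinement."""
    order = sorted(range(len(eta)), key=lambda i: abs(eta[i]), reverse=True)
    total = sum(abs(e) for e in eta)
    marked, acc = [], 0.0
    for i in order:
        if acc >= theta * total:
            break
        marked.append(i)
        acc += abs(eta[i])
    return sorted(marked)

# Elements 0 and 3 carry 80% of the estimated error, so only they are marked.
marked = mark_elements([0.5, 0.1, 0.05, 0.3, 0.05], theta=0.8)  # -> [0, 3]
```

Taking absolute values matters for goal-oriented indicators, since DWR contributions are signed and may cancel in the sum even when the local error is large.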
{"title":"Goal-adaptive Meshing of Isogeometric Kirchhoff–Love Shells","authors":"H. M. Verhelst, A. Mantzaflaris, M. Möller, J. H. Den Besten","doi":"10.1007/s00366-024-01958-4","DOIUrl":"https://doi.org/10.1007/s00366-024-01958-4","url":null,"abstract":"<p>Mesh adaptivity is a technique to provide detail in numerical solutions without the need to refine the mesh over the whole domain. Mesh adaptivity in isogeometric analysis can be driven by Truncated Hierarchical B-splines (THB-splines) which add degrees of freedom locally based on finer B-spline bases. Labeling of elements for refinement is typically done using residual-based error estimators. In this paper, an adaptive meshing workflow for isogeometric Kirchhoff–Love shell analysis is developed. This framework includes THB-splines, mesh admissibility for combined refinement and coarsening and the Dual-Weighted Residual (DWR) method for computing element-wise error contributions. The DWR can be used in several structural analysis problems, allowing the user to specify a goal quantity of interest which is used to mark elements and refine the mesh. This goal functional can involve, for example, displacements, stresses, eigenfrequencies etc. 
The proposed framework is evaluated through a set of different benchmark problems, including modal analysis, buckling analysis and non-linear snap-through and bifurcation problems, showing high accuracy of the DWR estimator and efficient allocation of degrees of freedom for advanced shell computations.</p>","PeriodicalId":11696,"journal":{"name":"Engineering with Computers","volume":"45 1","pages":""},"PeriodicalIF":8.7,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}