Title: Multi-fidelity wavelet neural operator surrogate model for time-independent and time-dependent reliability analysis
Authors: Tapas Tripura, Akshay Thakur, Souvik Chakraborty
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103672

Operator learning frameworks have recently emerged as an effective scientific machine learning tool for learning complex nonlinear operators of differential equations. Because neural operators learn an infinite-dimensional functional mapping, they are useful in applications requiring rapid prediction of solutions for a wide range of input functions. Tasks of this nature arise in many applications of uncertainty quantification, including reliability estimation and design under uncertainty, each of which demands thousands of samples spanning a wide range of possible input conditions, precisely the setting for which neural operators are suited. Although neural operators can learn complex nonlinear solution operators, they require extensive data for successful training. Unlike applications in computer vision, the computational cost of the numerical simulations and physical experiments that supply the synthetic and real training data limits how much data is available, compromising the performance of the trained neural operator and directly affecting the accuracy of uncertainty quantification results. We aim to alleviate this data bottleneck through multi-fidelity learning, in which a neural operator is trained with a large amount of inexpensive low-fidelity data alongside a small amount of expensive high-fidelity data. We propose the multi-fidelity wavelet neural operator, capable of learning solution operators from a multi-fidelity dataset, for efficient and effective data-driven reliability analysis of dynamical systems. We illustrate the performance of the proposed framework on bi-fidelity data simulated on coarse and refined grids for spatial and spatiotemporal systems.
Title: Description of the spatial variability of concrete via composite random field and failure analysis of chimney
Authors: Jinju Tao, Jingran He, Beibei Xiong, Yupeng Song
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103677
The inherent variability of concrete significantly affects structural safety and performance. This variability is a complex phenomenon influenced by multiple factors, including material properties, production processes, and environmental conditions, and understanding and quantifying it is crucial for reliable and safe structural design. Probabilistic methods are commonly used to account for concrete variability in structural design. In this paper, a composite random field approach combined with a hierarchical model is used to capture the multi-scale spatial variability of concrete. The random field of compressive strength is expressed as a sum of independent component random fields. To investigate the impact of concrete's spatial variability on structural response and failure modes, a failure analysis of a 115-m-tall chimney was conducted. The results indicate that the composite random field approach is a valuable method for incorporating concrete's spatial variability at different scales. The spatial variability of concrete exerts a substantial influence on the potential locations of severe compressive damage, and the failure modes are also affected. When the spatial variability of concrete is taken into account, an additional collapse mode emerges that aligns more closely with the chimney's actual collapse mode during an earthquake. Furthermore, the spatial variability of concrete also moderately affects the variability of the base shear force and the maximum inter-section drift angle. Notably, improper approaches to modeling the spatial variability of concrete significantly affect the predicted compressive damage and structural response.
{"title":"Description of the spatial variability of concrete via composite random field and failure analysis of chimney","authors":"Jinju Tao , Jingran He , Beibei Xiong , Yupeng Song","doi":"10.1016/j.probengmech.2024.103677","DOIUrl":"10.1016/j.probengmech.2024.103677","url":null,"abstract":"<div><p>The inherent variability of concrete significantly affects the structural safety and performance. The variability of concrete is a complex phenomenon influenced by multiple factors, including material properties, production processes, and environmental conditions. Understanding and quantifying the variability of concrete is crucial for reliable and safe structural design. Probabilistic methods are commonly used to account for concrete variability in structural design. In this paper, a composite random field approach combined with a hierarchy model is used to consider the multi-scale spatial variability of concrete. The random field of compressive strength is expressed as a sum of independent component random fields. To investigate the impact of concrete's spatial variability on structural response and failure modes, the failure analysis of a 115-m-tall chimney was conducted. The results indicate that the composite random field approach proves to be a valuable method for incorporating concrete's spatial variability at different scales. The spatial variability of concrete exerts a substantial influence on the potential positions where severe compressive damage might occur. Additionally, the failure modes are also affected by the spatial variability of concrete. When taking into account the spatial variability of concrete, an extra collapse mode emerges, aligning more closely with the chimney's actual collapse mode during an earthquake. Furthermore, the spatial variability of concrete also moderately impacts the variability of the base shear force and the maximum inter-section drift angle. Notably, improper approaches to considering the spatial variability of concrete significantly impact the concrete's compressive damage and structural response.</p></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"77 ","pages":"Article 103677"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141992670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Efficient computing technique for reliability analysis of high-dimensional and low-failure probability problems using active learning method
Authors: Pijus Rajak, Pronab Roy
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103662
Despite recent advancements in reliability analysis, high-dimensional, low-failure-probability problems remain challenging because many samples and function calls are required for an accurate result, and function calls sharply increase computational time. For this reason, an active learning algorithm based on a Kriging metamodel is proposed, in which an unsupervised algorithm selects training samples from random samples for the first and second iterations. The metamodel is then improved iteratively by enriching the domain of interest with samples near the limit state function and samples obtained from a space-filling design. As a result, the algorithm converges rapidly with a minimal number of function calls. An efficient stopping criterion is developed to avoid premature or overdue termination of the metamodel and to regulate the accuracy of the failure probability estimates. The efficacy of the algorithm is examined via relative error, number of function calls, and coefficient of efficiency in five high-dimensional, low-failure-probability examples with random and interval variables.
{"title":"Efficient computing technique for reliability analysis of high-dimensional and low-failure probability problems using active learning method","authors":"Pijus Rajak, Pronab Roy","doi":"10.1016/j.probengmech.2024.103662","DOIUrl":"10.1016/j.probengmech.2024.103662","url":null,"abstract":"<div><p>In spite of recent advancements in reliability analysis, high-dimensional and low-failure probability problems remain challenging because many samples and function calls are required for an accurate result. Function calls lead to a sharp increase in computational cost in terms of time. For this reason, an active learning algorithm is proposed using Kriging metamodel, where an unsupervised algorithm is used to select training samples from random samples for the first and second iterations. Then, the metamodel is improved iteratively by enriching the concerned domain with samples near the limit state function and samples obtained from a space-filling design. Hence, rapid convergence with the minimum number of function calls occurs using this active learning algorithm. An efficient stopping criterion has been developed to avoid premature or late-mature terminations of the metamodel and to regulate the accuracy of the failure probability estimations. The efficacy of this algorithm is examined using relative error, number of function calls, and coefficient of efficiency in five examples which are based on high-dimensional and low-failure probability with random and interval variables.</p></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"77 ","pages":"Article 103662"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141729734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Survival probability of structures under fatigue: A data-based approach
Authors: François-Baptiste Cartiaux, Frédéric Legoll, Alex Libal, Julien Reygner
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103657

This article addresses the probabilistic nature of fatigue life in structures subjected to cyclic loading with variable amplitude. Drawing on the formalization of Miner's cumulative damage rule that we introduced in a recent article (Cartiaux et al., 2023), we apply our methodology to estimate the survival probability of an industrial structure using experimental data. The study considers both the randomness in the initial state of the structure and in the amplitude of loading cycles. The results indicate that the variability of loading cycles can be captured through the concept of deterministic equivalent damage, providing a computationally efficient method for assessing the fatigue life of the structure. Furthermore, the article highlights that the usual combination of Miner's rule and the weakest link principle systematically overestimates the structure's fatigue life. On the case study that we consider, this overestimation reaches a multiplicative factor of more than two. We then describe how the probabilistic framework that we have introduced offers a remedy to this overestimation.
Title: Higher-order moments of spline chaos expansion
Authors: Sharif Rahman
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103666
Spline chaos expansion (SCE) is a finite series representation of an output random variable in terms of measure-consistent orthonormal splines in input random variables and deterministic coefficients. This paper reports new results from an assessment of SCE's approximation quality in calculating higher-order moments, if they exist, of the output random variable. A novel mathematical proof demonstrates that the moment of SCE of an arbitrary order converges to the exact moment for any degree of splines as the largest element size decreases. Complementary numerical analyses produce results consistent with the theoretical findings. A collection of simple yet relevant examples is presented to compare the approximation quality of SCE with that of the well-known polynomial chaos expansion (PCE). The results from these examples indicate that higher-order moments calculated using SCE converge for all cases considered in this study. In contrast, moments of PCE of order larger than two may or may not converge, depending on the regularity of the output function or the probability measure of the input random variables. Moreover, when both SCE- and PCE-generated moments converge, the former converges markedly faster than the latter in the presence of nonsmooth functions or unbounded domains of input random variables.
{"title":"Higher-order moments of spline chaos expansion","authors":"Sharif Rahman","doi":"10.1016/j.probengmech.2024.103666","DOIUrl":"10.1016/j.probengmech.2024.103666","url":null,"abstract":"<div><p>Spline chaos expansion, referred to as SCE, is a finite series representation of an output random variable in terms of measure-consistent orthonormal splines in input random variables and deterministic coefficients. This paper reports new results from an assessment of SCE’s approximation quality in calculating higher-order moments, if they exist, of the output random variable. A novel mathematical proof is provided to demonstrate that the moment of SCE of an arbitrary order converges to the exact moment for any degree of splines as the largest element size decreases. Complementary numerical analyses have been conducted, producing results consistent with theoretical findings. A collection of simple yet relevant examples is presented to grade the approximation quality of SCE with that of the well-known polynomial chaos expansion (PCE). The results from these examples indicate that higher-order moments calculated using SCE converge for all cases considered in this study. In contrast, the moments of PCE of an order larger than two may or may not converge, depending on the regularity of the output function or the probability measure of input random variables. Moreover, when both SCE- and PCE-generated moments converge, the convergence rate of the former is markedly faster than the latter in the presence of nonsmooth functions or unbounded domains of input random variables.</p></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"77 ","pages":"Article 103666"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141840884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Approximate Bayesian Computation for structural identification of ancient tie-rods using noisy modal data
Authors: Silvia Monchetti, Chiara Pepi, Cecilia Viscardi, Massimiliano Gioffrè
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103674

Masonry arches and vaults are common historic structural elements that frequently experience asymmetric loading due to seismic action or abutment settlements. Over the past few decades, numerous studies have sought to enhance our understanding of the structural behavior of these elements for the purpose of preventive conservation. The assessment of the structural performance of existing constructions typically relies on numerical models guided by a set of unknown input parameters, including geometry, mechanical characteristics, physical properties, and boundary conditions. These parameters can be estimated with deterministic optimization functions that minimize the discrepancy between the output of a numerical model and the measured dynamic and/or static structural response. However, deterministic approaches overlook the uncertainties associated with both input parameters and measurements. In this context, the Bayesian approach proves valuable for estimating unknown numerical model parameters and their associated uncertainties (posterior distributions): prior knowledge of the model parameters (prior distributions) is updated based on current measurements, with all sources of uncertainty affecting the observed quantities explicitly considered through likelihood functions. Two significant challenges arise, however: the likelihood function may be unknown or too complex to evaluate, and the computational cost of approximating the posterior distribution can be prohibitive. This study addresses these challenges by employing Approximate Bayesian Computation (ABC) to handle the intractable likelihood function, while the computational burden is mitigated through accurate surrogate models such as Polynomial Chaos Expansions (PCE) and Artificial Neural Networks (ANN). The research focuses on setting up numerical models for simple structural systems (tie-rods) and inferring unknown input parameters, such as mechanical properties and boundary conditions, through Bayesian updating based on observed structural responses (modal data, strains, displacements). The main novelties of this research are, on the one hand, a methodology for obtaining a reliable estimate of the axial force in ancient tie-rods that accounts for different sources of uncertainty and, on the other hand, the application of ABC to solve the structural identification inverse problem.
Title: Assessment of random dynamic behavior for EMUs high-speed train based on Monte Carlo simulation
Authors: Awel Momhur, Y.X. Zhao, Abrham Gebre
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103663
A novel statistical method is developed to obtain the dynamic response under irregular track excitations and independent uncertain parameters. The proposed approach combines a three-dimensional vehicle-track coupled dynamics model with uncertain parameters, and a new method is used to treat the dynamic indices: derailment coefficient, vertical/lateral wheel/rail force, vertical/lateral car-body acceleration, and wheel load reduction ratio. The model is validated by comparing deterministic simulation results with field measurements, which show excellent agreement given the limited data. The results reveal that strong vibration effects arise when uncertain parameters are present in the dynamic system. The total fit, the consistency of the vehicle safety indices, and the tail fit are evaluated to select the best-fitting distribution. Accordingly, the lognormal and extreme-maximum distributions appear to be appropriate assumed distributions for dynamic safety assessment under limited data.
{"title":"Assessment of random dynamic behavior for EMUs high-speed train based on Monte Carlo simulation","authors":"Awel Momhur , Y.X. Zhao , Abrham Gebre","doi":"10.1016/j.probengmech.2024.103663","DOIUrl":"10.1016/j.probengmech.2024.103663","url":null,"abstract":"<div><p>A novel statistical method was developed to obtain a dynamic response with irregular line excitations and independent uncertain parameters. The proposed approach combines a three-dimensional vehicle-track coupling dynamics model and uncertainty parameters. Moreover, a new method is used to treat the dynamic indices: derailment coefficient, vertical/lateral wheel/rail force, vertical/lateral car body acceleration, and wheel reduction ratio. The model is validated by comparing simulations (deterministic) results with field measurements, which provide excellent agreement with limited data. According to the findings, the results reveal that the high vibration effect arises when the uncertainty parameter in the dynamic system exists. The total fit effects, the consistency of the vehicle safety, and the tail fit effects are determined for selecting the best method. Therefore, regarding the approach, the lognormal and extreme maximum distribution values may be the appropriate assumed distribution for dynamic safety under limited data.</p></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"77 ","pages":"Article 103663"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141732037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: An efficient method for solving high-dimension stationary FPK equation of strongly nonlinear systems under additive and/or multiplicative white noise
Authors: Yangyang Xiao, Lincong Chen, Zhongdong Duan, Jianqiao Sun, Yanan Tang
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103668

Engineering structures may suffer from drastic nonlinear random vibrations in harsh environments. Random vibration has been studied extensively since the 1960s but remains an open problem for large-scale, strongly nonlinear systems. This paper proposes a neural-network-based random vibration analysis method for large-scale, strongly nonlinear systems under additive and/or multiplicative Gaussian white noise (GWN) excitations. First, the high-dimensional steady-state Fokker-Planck-Kolmogorov (FPK) equation governing the probability density function (PDF) of the state is reduced to a low-dimensional FPK equation involving only the state variables of interest, generally one or two dimensions. The equivalent drift coefficients (EDCs) and diffusion coefficients (EDFs) in the low-dimensional FPK equation are proven to be the conditional means of the full coefficients given the retained variables, and it is shown that these conditional means can be optimally estimated by regression. The EDCs and EDFs, as functions of the retained variables, are then approximated by semi-analytical radial basis function neural networks trained on samples generated by a few deterministic analyses. Finally, a physics-informed neural network solves the reduced steady-state FPK equation, yielding the PDF of the system response. Four typical examples under additive and/or multiplicative GWN excitations are used to examine the accuracy and efficiency of the proposed method against the exact solution (where available) or Monte Carlo simulations. The proposed method is also more accurate than the globally-evolving-based generalized density evolution equation scheme, a similar method of its kind, especially for strongly nonlinear systems. Notably, although only steady-state systems are treated in this paper, there is no obstacle to extending the proposed framework to transient systems.
Title: Covariance-based MCMC for high-dimensional Bayesian updating with Sequential Monte Carlo
Authors: Barbara Carrera, Iason Papaioannou
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103667
Sequential Monte Carlo (SMC) is a reliable method for generating samples from the posterior parameter distribution in a Bayesian updating context. The method samples a sequence of distributions that starts from the prior and gradually approaches the posterior. Sampling from the distribution sequence is performed through a resample-move scheme, in which the move step uses a Markov Chain Monte Carlo (MCMC) algorithm. The preconditioned Crank-Nicolson (pCN) algorithm is a popular choice for the MCMC step in high-dimensional Bayesian updating problems, since its performance is invariant to the dimension of the prior distribution. This paper proposes two other SMC variants that use covariance information to inform the MCMC proposals and compares their performance with that of pCN-based SMC. In particular, a variation of the pCN algorithm that employs covariance information and the principal component Metropolis-Hastings algorithm are considered. These algorithms are combined with an intermittent, recursive scheme for updating the target distribution covariance matrix based on the current MCMC progress. We test the performance of the algorithms in three numerical examples: a two-dimensional algebraic example, the estimation of the flexibility of a cantilever beam, and the estimation of the hydraulic conductivity field of an aquifer. The results show that covariance-based MCMC algorithms produce smaller errors in the parameter mean and variance and better estimates of the model evidence than the pCN approach.
{"title":"Covariance-based MCMC for high-dimensional Bayesian updating with Sequential Monte Carlo","authors":"Barbara Carrera , Iason Papaioannou","doi":"10.1016/j.probengmech.2024.103667","DOIUrl":"10.1016/j.probengmech.2024.103667","url":null,"abstract":"<div><p>Sequential Monte Carlo (SMC) is a reliable method to generate samples from the posterior parameter distribution in a Bayesian updating context. The method samples a series of distributions sequentially, which start from the prior distribution and gradually approach the posterior distribution. Sampling from the distribution sequence is performed through application of a resample-move scheme, whereby the move step is performed using a Markov Chain Monte Carlo (MCMC) algorithm. The preconditioned Crank–Nicolson (pCN) is a popular choice for the MCMC step in high dimensional Bayesian updating problems, since its performance is invariant to the dimension of the prior distribution. This paper proposes two other SMC variants that use covariance information to inform the MCMC distribution proposals and compares their performance to the one of pCN-based SMC. Particularly, a variation of the pCN algorithm that employs covariance information, and the principle component Metropolis Hastings algorithm are considered. These algorithms are combined with an intermittent and recursive updating scheme of the target distribution covariance matrix based on the current MCMC progress. We test the performance of the algorithms in three numerical examples; a two dimensional algebraic example, the estimation of the flexibility of a cantilever beam and the estimation of the hydraulic conductivity field of an aquifer. The results show that covariance-based MCMC algorithms are capable of producing smaller errors in parameter mean and variance and better estimates of the model evidence compared to the pCN approach.</p></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"77 ","pages":"Article 103667"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0266892024000894/pdfft?md5=bed64696875a3a53f78eb10e3b4d690e&pid=1-s2.0-S0266892024000894-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141850855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Panamax cargo-vessel excessive-roll dynamics based on novel deconvolution method
Authors: Oleg Gaidai, Alia Ashraf, Yu Cao, Jinlu Sheng, Yan Zhu, Hongchen Li
Pub Date: 2024-07-01 | DOI: 10.1016/j.probengmech.2024.103676
This study presents a state-of-the-art extreme-value-prediction methodology based on deconvolution that can be utilized in marine, offshore, and naval-engineering applications. First, a measured gust-windspeed dataset is used to illustrate the accuracy of the deconvolution method. Second, a raw real-time roll-dynamics dataset measured onboard an operating loaded TEU2800 container vessel during numerous trans-Atlantic crossings is analyzed. The risk of container loss owing to excessive rolling motion is a key issue in cargo-vessel transportation. The complex nonlinear and nonstationary characteristics of incoming waves and the associated vessel movements make it challenging to accurately forecast excessive roll angles. When a loaded cargo vessel sails through a harsh stormy environment, higher-order dynamic motion effects become evident and the effect of nonlinearities may increase significantly, while laboratory tests are affected by the wave parameters and similarity ratios used. Consequently, raw, unfiltered motion data obtained from cargo vessels traversing adverse weather provide valuable insights into cargo-vessel reliability. Parametric extrapolations based on certain functional classes are typically employed to extrapolate and fit probability distributions estimated from the underlying dataset. This investigation presents an alternative nonparametric extrapolation methodology based on the intrinsic properties of the raw underlying dataset, without introducing any assumptions about the extrapolation functional class.

This novel extrapolation-by-deconvolution method is suitable for contemporary marine-engineering and design applications and serves as an alternative to existing reliability methods. The prediction accuracy of the deconvolution methodology is demonstrated by comparison with a modified four-parameter Weibull-type extrapolation technique. Compared with counterpart sub-asymptotic statistical methods, such as the modified Weibull-type fit, peaks over threshold, and the generalized Pareto distribution, the advocated deconvolution method is superior in terms of extrapolation numerical stability.
{"title":"Panamax cargo-vessel excessive-roll dynamics based on novel deconvolution method","authors":"Oleg Gaidai , Alia Ashraf , Yu Cao , Jinlu Sheng , Yan Zhu , Hongchen Li","doi":"10.1016/j.probengmech.2024.103676","DOIUrl":"10.1016/j.probengmech.2024.103676","url":null,"abstract":"<div><p>This study presents a state-of-the-art extreme-value-prediction methodology based on deconvolution that can be utilized in marine, offshore, and naval-engineering applications. First, a measured gust-windspeed dataset is utilized to illustrate the accuracy of the deconvolution method. Second, a real-time roll dynamics raw dataset measured onboard an operating loaded TEU2800 container vessel is analyzed, and the vessel motion data are measured during numerous trans-Atlantic crossings. The risk of container loss owing to excessive rolling motion is a key issue in cargo vessel transportation. The complex nonlinear and nonstationary characteristics of incoming waves and the associated cargo vessel movements render it challenging to accurately forecast excessive vessel roll angles. When a loaded cargo vessel sails through a harsh stormy environment, higher-order dynamic motion effects become evident and the effect of nonlinearities may increase significantly. Meanwhile, laboratory testing are affected by the wave parameters and similarity ratios used. Consequently, raw/unfiltered motion data obtained from cargo vessels traversing in adverse weather conditions provide valuable insights into cargo vessel reliability. Parametric extrapolations based on certain functional classes are typically employed to extrapolate and fit probability distributions estimated from the underlying dataset. This investigation aims to present an alternative nonparametric extrapolation methodology based on the intrinsic properties of the raw underlying dataset without introducing any assumptions regarding the extrapolation functional class.</p><p>This novel extrapolation deconvolution method is suitable for contemporary marine-engineering and design applications, as well as serves as an alternative to existing reliability methods. The prediction accuracy of the deconvolution methodology is demonstrated by comparing it with a modified four-parameter Weibull-type extrapolation technique. Compared with its counterpart sub-asymptotic statistical methods, such as the modified Weibull-type fit, peaks over the threshold, and generalized Pareto, the advocated deconvolution method is superior in term of its extrapolation numerical stability.</p></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"77 ","pages":"Article 103676"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141992664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}