A sequential stratified importance sampling method for extremely small time-dependent failure probability with high-dimensional input vector
Yixin Lu, Zhenzhou Lu, Yuhua Yan, Hengchao Li
Probabilistic Engineering Mechanics 82 (2025), Article 103861. DOI: 10.1016/j.probengmech.2025.103861. Published 2025-10-01; online 2025-11-01.

To address the challenge of estimating an extremely small time-dependent failure probability (TDFP) with a high-dimensional input vector, we propose a sequential stratified importance sampling method (SSIS) with an ensemble stochastic configuration network (eSCN) embedded within it (eSCN-SSIS) to improve efficiency. First, stratified cluster analysis is employed, enabling SSIS to construct a series of explicit importance sampling densities that explore the time-dependent failure domain layer by layer, thereby easing the exploration of rare time-dependent failure domains and reducing the variance in estimating the extremely small TDFP. Second, owing to the robust predictive capability of the eSCN for high-dimensional input vectors, the eSCN is adaptively embedded into SSIS to replace the time-dependent performance function model, so the required model evaluations are substantially reduced. Notably, even when applied to an explicit model, eSCN-SSIS is superior to Monte Carlo simulation (MCS), requiring considerably fewer model evaluations and shorter computational time. In contrast, although importance sampling based on the Kriging model surpasses MCS in terms of model evaluations, it remains inferior in computational time. Owing to its hierarchical construction of explicit importance sampling densities and adaptive embedding of the eSCN, the proposed eSCN-SSIS applies to engineering problems characterized by extremely small TDFP and high-dimensional input vectors, as verified by the presented examples.
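The abstract builds on the basic importance sampling idea for rare events. A minimal sketch of that underlying idea (not the authors' SSIS algorithm) is estimating a small exceedance probability P(X > β) for standard normal X by sampling from a density shifted into the failure region and reweighting by the likelihood ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 4.0       # failure threshold; exact P(X > 4) is about 3.17e-5
n = 20_000

# Plain Monte Carlo: almost no samples land in the failure domain,
# so the estimate is dominated by sampling noise.
x_mc = rng.standard_normal(n)
p_mc = np.mean(x_mc > beta)

# Importance sampling: draw from N(beta, 1), centered on the failure
# region, and reweight each sample by phi(x) / phi(x - beta).
x_is = rng.standard_normal(n) + beta
w = np.exp(-0.5 * x_is**2) / np.exp(-0.5 * (x_is - beta) ** 2)
p_is = np.mean((x_is > beta) * w)

print(p_mc, p_is)
```

With the same sample budget, the importance sampling estimate lands within a few percent of the exact value, while the crude Monte Carlo estimate is essentially noise; SSIS layers such densities sequentially to reach far smaller probabilities.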
Development of a probabilistic health model representing variable amplitude fatigue loading damage in austenitic stainless steel nuclear components
Théo Lecleve, Stéphan Courtin, Fabien Szmytka, Chu Mai
Probabilistic Engineering Mechanics 82 (2025), Article 103851. DOI: 10.1016/j.probengmech.2025.103851. Published 2025-10-01.

Fatigue scatter models allow the computation of uncertainties and confidence intervals linked to fatigue failure prediction. The scatter phenomenon can be linked to a microscopic crack growth mechanism that is not modeled in S–N curve fatigue assessment approaches. The main fatigue scatter models found in the literature only allow a linear dependence of the scatter on the cyclic load amplitudes to be modeled. The first contribution of this article is the development of a fatigue model that allows the prediction of nonlinear scatter dependence on load amplitude. More precisely, the proposed dependence structure is based on a partially affine function with a threshold effect, such that there is no dependence for sufficiently high load amplitudes. The model is successfully tested on a large fatigue database. A cumulative damage model is then obtained by adding two assumptions extracted from the literature. It is based on representing fatigue damage as the decrease in a structure's fatigue health. The constructed model presents nonlinear cumulative damage properties and is successfully tested on two amplitude fatigue tests extracted from the literature. The whole fatigue failure prediction framework is finally applied to a real structure subjected to variable amplitude loadings.
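The "partially affine function with a threshold effect" describes a scatter that grows as amplitude drops below a threshold and stays constant above it. A hedged sketch of that shape (all coefficients hypothetical, not the paper's fitted values):

```python
import numpy as np

# Hypothetical scatter model: standard deviation of log-life as a
# partially affine function of load amplitude s, constant above s_th.
def log_life_std(s, s_th=200.0, sigma0=0.10, slope=0.002):
    # Below s_th the scatter grows affinely; above it, no dependence.
    return sigma0 + slope * np.maximum(s_th - s, 0.0)

amps = np.array([120.0, 180.0, 200.0, 260.0])
print(log_life_std(amps))   # scatter shrinks to sigma0 at and above s_th
```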
A time-variant uncertainty propagation analysis method for multimodal probability distributions
Boqun Xie, Xinpeng Wei, Qiang Gu, Chao Jiang, Jinwu Li
Probabilistic Engineering Mechanics 82 (2025), Article 103840. DOI: 10.1016/j.probengmech.2025.103840. Published 2025-10-01.

In practical engineering problems, scenarios frequently emerge where random parameters follow multimodal probability distributions. Traditional time-variant uncertainty propagation methods, originally designed for unimodal distributions, risk incurring significant inaccuracies when applied to such multimodal cases. To address this challenge, this paper introduces a time-variant uncertainty propagation analysis framework tailored for multimodal probability distributions. Initially, the time-variant response function is discretized into a series of instantaneous response functions. Subsequently, an improved point estimation method is employed to compute high-order statistical moments and correlation coefficients of these instantaneous responses. Following this, the maximum entropy method is used to reconstruct the probability density function of each instantaneous response function from its derived statistical moments. The highest order of statistical moments is adaptively determined through entropy-based criteria to balance computational efficiency and accuracy. Ultimately, the validity and effectiveness of the proposed framework are demonstrated through three examples.
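The point-estimate step can be illustrated minimally: at one time instant, the moments of a response Y = g(X) with standard normal X are approximated from a handful of Gauss–Hermite points instead of a large random sample. The response function below is a hypothetical stand-in, not the paper's:

```python
import numpy as np

def response(x, t=1.0):
    # Hypothetical instantaneous response at time t (illustrative only).
    return np.exp(0.3 * x) + 0.1 * t

# Gauss-Hermite nodes/weights are defined for weight exp(-x^2);
# rescale them to integrate against the standard normal density.
nodes, weights = np.polynomial.hermite.hermgauss(7)
x = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

y = response(x)
mean = np.sum(w * y)                 # first moment
m2 = np.sum(w * (y - mean) ** 2)     # central second moment
m3 = np.sum(w * (y - mean) ** 3)     # central third moment
skew = m3 / m2 ** 1.5
print(mean, m2, skew)
```

Seven model evaluations reproduce the exact lognormal moments of this test response to several digits; a maximum entropy fit to such moments would then supply the instantaneous density.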
Scale-dependent K_III in composites: A tensor random field approach
Yaswanth Sai Jetti, Martin Ostoja-Starzewski
Probabilistic Engineering Mechanics 82 (2025), Article 103859. DOI: 10.1016/j.probengmech.2025.103859. Published 2025-10-01; online 2025-11-04.

We study the stress field near a crack tip in a random heterogeneous domain under remote anti-plane loading using tensor-valued random fields (TRFs). Specifically, we investigate interpenetrating phase composites (IPCs) and generate micromechanically consistent, statistically homogeneous and isotropic stiffness and compliance TRFs that capture the full spatial correlation structure. We focus on the mode III stress intensity factor (K_III), which depends on the mesoscale size δ, the phase contrast k, and the specific realization of the material microstructure. For stiffness TRFs, the distribution of K_III is approximately Gaussian with a mean exceeding the homogeneous value; the probability of exceedance is greater than 0.5, while the variance increases with k and decreases with increasing δ. For compliance TRFs, the K_III distribution is positively skewed, yielding a more conservative response. Taken together, the stiffness- and compliance-based stochastic boundary value problems bound the true mechanical response and provide a practical range for estimating failure probabilities. These results demonstrate that mesoscale variability and phase contrast influence both the mean and scatter of K_III, underscoring the limitations of homogenized medium assumptions for fracture assessment.
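The exceedance statement can be checked with a toy calculation (all numbers hypothetical, not the paper's data): when K_III across realizations is roughly Gaussian with a mean above the homogenized value K_hom, the probability P(K_III > K_hom) necessarily exceeds 0.5.

```python
import numpy as np

rng = np.random.default_rng(3)
k_hom = 1.0                         # hypothetical homogenized value
# Hypothetical Gaussian K_III ensemble with mean shifted above k_hom.
k_samples = rng.normal(loc=1.05, scale=0.08, size=100_000)
p_exceed = np.mean(k_samples > k_hom)
print(p_exceed)                     # above 0.5 because the mean exceeds k_hom
```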
Multi-objective design optimization of structural systems based on probabilistic life-cycle criteria through a sequential decision process
Aditya Sharma, Gordon P. Warn
Probabilistic Engineering Mechanics 82 (2025), Article 103855. DOI: 10.1016/j.probengmech.2025.103855. Published 2025-10-01; online 2025-10-08.

A challenge with incorporating life-cycle criteria into design optimization is the need to quantify the time-varying reliability of structural systems deteriorating in uncertain and non-stationary environments. This paper presents a computational methodology that addresses this challenge by combining set-based design with multi-fidelity modeling to broadly and efficiently explore a diverse set of design alternatives while systematically converging to the set of Pareto optimal designs. The result is a framework for multi-objective design optimization of structural systems based on probabilistic life-cycle criteria through a sequential decision process (SDP). At each decision state, design alternatives are evaluated and compared using bounds on decision criteria, and dominated (less promising) designs are eliminated from further evaluation. Computational efficiency is achieved by sequencing models of increasingly higher fidelity. The SDP accommodates multiple objectives, discrete design variables, and varying structural concepts, and accounts for redundancy, system reliability, and the risk attitude of the decision maker(s). The efficacy of the methodology is demonstrated through numerical examples involving multi-objective design optimization of steel trusses, where the goal is to identify optimal design variables that simultaneously minimize the expected value of the life-cycle cost and the corresponding risk of deviation from the expected value. By sequencing models of increasing fidelity, the SDP is shown to efficiently converge to the set of Pareto optimal designs using 0.125–0.151 times the number of model evaluations required by full evaluation with the highest fidelity model. Furthermore, the influence of structural configuration, material grade, and cross-sectional areas on tradeoffs among life-cycle costs is shown for Pareto optimal designs.
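The bound-based elimination step can be sketched in a deliberately simplified single-objective form (not the authors' multi-objective SDP): each design gets a cheap low-fidelity interval enclosing its true cost, a design is pruned once its lower bound exceeds another design's upper bound, and only survivors would proceed to the expensive high-fidelity model.

```python
# Hypothetical designs mapped to low-fidelity cost bounds (lo, hi);
# the true cost is assumed to lie inside each interval.
designs = {"A": (10.0, 14.0), "B": (13.0, 18.0), "C": (16.0, 21.0)}

# Any design whose lower bound exceeds the smallest upper bound cannot
# be optimal, so it is eliminated without further evaluation.
best_upper = min(hi for lo, hi in designs.values())
survivors = [name for name, (lo, hi) in designs.items() if lo <= best_upper]
print(survivors)   # C is pruned: its lower bound 16.0 exceeds A's upper 14.0
```

Tightening the bounds with a higher-fidelity model and repeating this test is what lets the sequential process converge with a fraction of the highest-fidelity evaluations.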
Multidisciplinary uncertainty propagation method considering correlated field variables for rocket systems
Siyi Du, Chunna Li, Yang Liu, Chunlin Gong, Weikai Gao
Probabilistic Engineering Mechanics 82 (2025), Article 103857. DOI: 10.1016/j.probengmech.2025.103857. Published 2025-10-01; online 2025-10-28.

Throughout a rocket's lifecycle, numerous random uncertainties can significantly influence performance. However, existing uncertainty propagation (UP) methods for multidisciplinary systems often neglect correlations among field variables, leading to reduced accuracy. To overcome this limitation, we propose a multidisciplinary UP method that explicitly incorporates these correlations. For variables propagated from upper-level disciplines, the Nataf transformation is applied to generate correlated input samples for the current discipline, which then serve as the basis for uncertainty analysis. To accelerate the calculation of the probability density function of field variables within the Nataf transformation, we further introduce a warm-start strategy integrated with the maximum entropy method. In the case study of UP across multiple disciplines of a solid rocket system, using Monte Carlo simulation (MCS) as the benchmark, incorporating variable correlations yields notable improvements: the standard deviation accuracy of velocity and total energy at the first-stage separation point increased by 22.75 % and 32.57 %, respectively, while the accuracy of their lower bounds improved by 5.20 % and 4.20 %. These results demonstrate that the proposed method effectively addresses UP problems involving both numerical and field correlated variables, significantly enhancing the accuracy of UP.
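The Nataf step referenced in the abstract can be sketched as follows (this is the generic Nataf/Gaussian-copula sampling idea, not the authors' implementation, and it skips the root-finding that matches a target correlation in physical space): correlate standard normals via a Cholesky factor, then map each component through its target marginal via the normal CDF.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho_z = 0.8                                   # correlation in normal space
L = np.linalg.cholesky(np.array([[1.0, rho_z], [rho_z, 1.0]]))
z = L @ rng.standard_normal((2, 50_000))      # correlated standard normals
u = stats.norm.cdf(z)                         # uniform marginals (copula)

# Hypothetical target marginals for two coupled quantities: a lognormal
# thrust-like variable and a Gumbel load-like variable.
x1 = stats.lognorm.ppf(u[0], s=0.2, scale=100.0)
x2 = stats.gumbel_r.ppf(u[1], loc=50.0, scale=5.0)

r = np.corrcoef(x1, x2)[0, 1]                 # induced physical correlation
print(r)
```

The induced correlation in physical space falls slightly below the 0.8 imposed in normal space; a full Nataf transformation solves for the normal-space correlation that reproduces the target physical one.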
Stochastic dynamics in power systems excited by discrete-continuous random disturbances
Rongchun Hu, Zheng Zeng, Sheng Zhou, Zhongliang Xie, Xudong Gu
Probabilistic Engineering Mechanics 82 (2025), Article 103858. DOI: 10.1016/j.probengmech.2025.103858. Published 2025-10-01.

This paper presents a unified framework for analyzing power systems subjected to both discrete and continuous random disturbances, a critical gap in the existing literature, which typically treats these disturbances separately. Unlike conventional approaches that focus on either continuous stochastic processes or discrete switching events in isolation, our methodology simultaneously captures both types of uncertainty within an integrated Markovian jump framework. The stochastic model of multi-machine power systems is formulated as a high-dimensional hybrid system and transformed into a quasi-Hamiltonian system with Markovian jump processes. A two-step approximation method is developed that first converts the hybrid system into a weighted-average model, then reduces it to a one-dimensional averaged Itô equation representing the system energy dynamics. The approximate analytical solution of the corresponding Fokker-Planck-Kolmogorov (FPK) equation provides stationary response estimates for the original hybrid systems. A Lyapunov exponent approach is employed for asymptotic stability analysis with probability one. The methodology is validated through comprehensive analysis of Kundur's 4-machine 2-area system, demonstrating superior computational efficiency and analytical insights compared to traditional Monte Carlo simulations.
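A one-dimensional averaged Itô equation dH = m(H) dt + σ(H) dW of the kind the abstract describes can be simulated directly by Euler–Maruyama; the drift and diffusion below are illustrative placeholders, not the paper's averaged coefficients for the power system.

```python
import numpy as np

rng = np.random.default_rng(2)

def m(h):
    # Hypothetical drift: linear dissipation balanced by constant excitation.
    return -0.5 * h + 0.2

def sigma(h):
    # Hypothetical diffusion, vanishing as the energy H approaches zero.
    return np.sqrt(0.2 * np.maximum(h, 0.0))

dt, n_steps, n_paths = 1e-3, 10_000, 2_000
h = np.full(n_paths, 1.0)                    # initial energy on every path
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)
    h += m(h) * dt + sigma(h) * dw           # Euler-Maruyama step
    h = np.maximum(h, 0.0)                   # energy stays non-negative

print(h.mean())
```

For these placeholder coefficients the stationary mean is 0.2/0.5 = 0.4, which the ensemble average approaches after a few relaxation times; in the paper's setting the stationary density instead comes from the analytical FPK solution.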
Quantification of epistemic uncertainty for probabilistic seismic hazard analysis based on probability density evolution method
Meng Wang, Xiangling Gao, Chao-Lie Ning
Probabilistic Engineering Mechanics 82 (2025), Article 103869. DOI: 10.1016/j.probengmech.2025.103869. Published 2025-10-01; online 2025-11-21.

Probabilistic seismic hazard analysis (PSHA) is a widely used framework to assess the seismic hazard of a given site. Despite its wide usage, there are some limitations, particularly in quantifying the epistemic uncertainty through the traditional methods, i.e., the logic tree method, the ensemble model, and Monte Carlo (MC) simulation. These methods cannot accurately or efficiently capture the probability density functions (PDFs) of earthquake intensity measures (IMs). To address this problem, a novel method is proposed in this study by introducing the probability density evolution method (PDEM) into the PSHA to quantify the epistemic uncertainty. Different from the traditional methods, the proposed method treats the epistemic uncertainty as basic random variables within a physical stochastic system. Then, the generalized F-discrepancy method is adopted to select representative samples from the complete probability space formed by the basic random variables. Each representative sample corresponds to an alternative PSHA model with an assigned probability, predicting earthquake IMs at a prescribed annual exceedance rate through the classical formula. Furthermore, the generalized density evolution equation (GDEE) is employed for all representative samples to compute the PDFs of earthquake IMs. To demonstrate the advantage of the proposed method, the PDF of peak ground acceleration (PGA) and of elastic spectral acceleration at various vibration periods, i.e., Sa, is computed for a hypothetical site in Shanghai, China. For comparison, the corresponding PGA and Sa predicted by the logic tree method, the ensemble model, and the MC simulation are computed. The investigations indicate that the proposed method can estimate the PDF of earthquake IMs at each annual exceedance rate accurately and efficiently. The PDFs have multimodal distributions, which cannot be well captured by the logic tree method or the ensemble model. Although MC simulation is capable of describing multimodal distribution characteristics, the proposed method requires fewer alternative models, thus greatly reducing the computational cost. Therefore, quantifying the epistemic uncertainty of the PSHA by the PDEM facilitates uncertainty quantification in regional seismic risk analysis.
Pub Date : 2025-10-01Epub Date: 2025-09-17DOI: 10.1016/j.probengmech.2025.103848
Wanying Yun , Fengyuan Li , Yue Pan , Hongfeng Zhang
Global reliability sensitivity analysis plays a critical role in identifying both important and unimportant variables affecting reliability, thus providing guidance for the simplification of reliability-based design optimization. Developing an efficient algorithm for estimating global reliability sensitivity indices is essential for the practical application of this theory in engineering contexts. This paper proposes an effective algorithm leveraging a metamodel-based importance sampling method combined with an adaptive Kriging model and a new single-loop estimation formula. Firstly, global reliability sensitivity analysis is equivalently transformed into an unconditional failure probability analysis and a two failure modes-based parallel failure probability analysis, utilizing the new single-loop estimation formula. Secondly, by sequentially constructing the importance sampling probability density functions for the variables within the global reliability sensitivity indices, both the unconditional failure probability and the two failure modes-based parallel failure probability can be efficiently estimated through the integrated metamodel-based importance sampling approach with the adaptive Kriging method. Finally, the efficiency and accuracy of the proposed method are systematically validated through a numerical example of a roof truss structure and a finite element model-based turbine shaft engineering structure.
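As a rough sketch of why importance sampling pays off for the small failure probabilities this abstract targets, here is a one-dimensional stand-in (not the paper's sequential metamodel scheme with adaptive Kriging): shifting the sampling density onto the failure region turns a rare event into a common one, and reweighting recovers an unbiased estimate.

```python
import math
import numpy as np

# Minimal importance-sampling sketch (illustrative only; not the paper's
# sequential metamodel scheme): estimate the small failure probability
# Pf = P(X > beta) for X ~ N(0, 1) by sampling from the shifted density
# N(beta, 1) centred on the failure region and reweighting.
beta = 4.0
n = 50_000
rng = np.random.default_rng(1)

y = rng.normal(beta, 1.0, size=n)                     # samples from IS density
w = np.exp(-y**2 / 2) / np.exp(-(y - beta)**2 / 2)    # phi(y) / phi(y - beta)
pf_is = float(np.mean((y > beta) * w))                # IS estimate of Pf

pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))     # Phi(-beta), ~3.17e-5
print(f"IS estimate {pf_is:.3e} vs exact {pf_exact:.3e}")
```

Crude Monte Carlo would need on the order of 10^7 samples for a comparable accuracy at this probability level; the metamodel layer in the paper further replaces each performance-function call with a cheap Kriging prediction.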
{"title":"A sequential metamodel-based importance sampling coupled with adaptive Kriging model method for efficiently estimating the global reliability sensitivity indices","authors":"Wanying Yun , Fengyuan Li , Yue Pan , Hongfeng Zhang","doi":"10.1016/j.probengmech.2025.103848","DOIUrl":"10.1016/j.probengmech.2025.103848","url":null,"abstract":"<div><div>Global reliability sensitivity analysis plays a critical role in identifying both important and unimportant variables affecting reliability, thus providing guidance for the simplification of reliability-based design optimization. Developing an efficient algorithm for estimating global reliability sensitivity indices is essential for the practical application of this theory in engineering contexts. This paper proposes an effective algorithm leveraging a metamodel-based importance sampling method combined with an adaptive Kriging model and a new single-loop estimation formula. Firstly, global reliability sensitivity analysis is equivalently transformed into an unconditional failure probability analysis and a two failure modes-based parallel failure probability analysis, utilizing the new single-loop estimation formula. Secondly, by sequentially constructing the importance sampling probability density functions for the variables within the global reliability sensitivity indices, both the unconditional failure probability and the two failure modes-based parallel failure probability can be efficiently estimated through the integrated metamodel-based importance sampling approach with the adaptive Kriging method. 
Finally, the efficiency and accuracy of the proposed method are systematically validated through a numerical example of a roof truss structure and a finite element model-based turbine shaft engineering structure.</div></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"82 ","pages":"Article 103848"},"PeriodicalIF":3.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145109617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subsea control systems are crucial for ensuring the safe and stable operation of subsea oil and gas production. However, traditional reliability assessment methods face challenges in handling uncertain or incomplete fault data in deep-sea environments. In this study, an integrated approach combining Fuzzy Fault Tree Analysis (FFTA) and Bayesian Network (BN) is proposed to improve the reliability assessment of subsea control systems under uncertainty. Firstly, the fault tree model with ‘subsea control systems failure’ as the primary event is constructed and 42 basic events are identified as contributing factors. To address the lack of precise failure data, fuzzy set theory is applied to estimate failure probabilities at different confidence levels (denoted by λ) to represent varying degrees of certainty. When λ = 1, the failure probability is calculated as 0.0003904, while when λ = 0, the failure probability falls within the fuzzy interval [0.1121 × 10⁻³, 0.6334 × 10⁻³]. Subsequently, the Bayesian probabilistic prediction model is constructed based on uncertain data and small sample conditions, enabling the determination of the system's expected reliability value. Finally, the corresponding Bayesian network model is constructed based on the fault tree analysis outcomes to further enhance the reliability assessment of subsea control systems. The quantitative analysis is performed under the condition of λ = 1, and the system's failure probability is calculated as 0.00038979759, which is highly consistent with the calculated value of the fault tree analysis. Subsequently, reverse diagnostic inference is performed to obtain the posterior probability of the root node. However, relying solely on posterior probability for diagnosis may lack reliability. To enhance diagnostic accuracy, integrating probabilistic importance, critical importance and sensitivity analyses is essential to pinpoint the primary factors influencing system failure.
Various diagnostic metrics consistently highlight nodes BF39 (Sand sensor fault), BF26 (Subsea control module optical fiber coupler fault) and BF19 (Subsea allocation device jumper fault) as system vulnerabilities. These findings validate the method's efficacy and establish a theoretical basis for risk-informed decision-making in subsea oil and gas system reliability management.
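The forward and reverse inference described in this abstract can be sketched on a deliberately tiny fault tree. The paper's tree has 42 basic events; the three probabilities below are invented for illustration. The forward pass gives the top-event probability; Bayes' rule then performs the reverse diagnostic inference, ranking basic events by their posterior probability given system failure.

```python
import numpy as np

# Hypothetical three-event OR gate (probabilities invented for illustration,
# not taken from the paper).  Forward pass: top-event probability.
# Reverse pass: posterior of each basic event given system failure.
p = np.array([1e-4, 3e-4, 5e-5])          # basic-event failure probabilities

p_top = 1.0 - np.prod(1.0 - p)            # OR gate: any basic event fails system
# For an OR gate, P(failure | event_i) = 1, so posterior = p_i / P(failure).
posterior = p / p_top

print(f"P(top) = {p_top:.4e}")            # ~ 4.4995e-04
print(posterior.round(3))                 # second event dominates the diagnosis
```

As the abstract notes, posterior probability alone can be misleading for diagnosis, which is why the paper supplements it with probabilistic importance, critical importance, and sensitivity measures.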
{"title":"Reliability analysis of subsea control systems based on FFTA and Bayesian network","authors":"Chuankun Zhou , Jian Liu , Zihao Jiao , Guangfei Zhang , Yuqing Chen","doi":"10.1016/j.probengmech.2025.103831","DOIUrl":"10.1016/j.probengmech.2025.103831","url":null,"abstract":"<div><div>Subsea control systems are crucial for ensuring the safe and stable operation of subsea oil and gas production. However, traditional reliability assessment methods face challenges in handling uncertain or incomplete fault data in deep-sea environments. In this study, an integrated approach combining Fuzzy Fault Tree Analysis (FFTA) and Bayesian Network (BN) is proposed to improve the reliability assessment of subsea control systems under uncertainty. Firstly, the fault tree model with ‘subsea control systems failure’ as the primary event is constructed and 42 basic events are identified as contributing factors. To address the lack of precise failure data, fuzzy set theory is applied to estimate failure probabilities at different confidence levels (denoted by λ) to represent varying degrees of certainty. When λ = 1, the failure probability is calculated as 0.0003904, while when λ = 0, the failure probability falls within the fuzzy interval [0.1121 × 10<sup>−3</sup>, 0.6334 × 10<sup>−3</sup>]. Subsequently, the Bayesian probabilistic prediction model is constructed based on uncertain data and small sample conditions, enabling the determination of the system's expected reliability value. Finally, the corresponding Bayesian network model is constructed based on the fault tree analysis outcomes to further enhance the reliability assessment of subsea control systems. The quantitative analysis is performed under the condition of λ = 1, and the system's failure probability is calculated as 0.00038979759, which is highly consistent with the calculated value of the fault tree analysis.
Subsequently, reverse diagnostic inference is performed to obtain the posterior probability of the root node. However, relying solely on posterior probability for diagnosis may lack reliability. To enhance diagnostic accuracy, integrating probabilistic importance, critical importance and sensitivity analyses is essential to pinpoint the primary factors influencing system failure. Various diagnostic metrics consistently highlight nodes BF39 (Sand sensor fault), BF26 (Subsea control module optical fiber coupler fault) and BF19 (Subsea allocation device jumper fault) as system vulnerabilities. These findings validate the method's efficacy and establish a theoretical basis for risk-informed decision-making in subsea oil and gas system reliability management.</div></div>","PeriodicalId":54583,"journal":{"name":"Probabilistic Engineering Mechanics","volume":"81 ","pages":"Article 103831"},"PeriodicalIF":3.5,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}