Pub Date: 2026-07-01 | Epub Date: 2026-01-10 | DOI: 10.1016/j.ress.2026.112215
He Yi , Narayanaswamy Balakrishnan , Xiang Li
In the context of consecutive k-type systems, multi-state system models have so far been considered only in the one-dimensional case, not in the two-dimensional case, due to the complexity involved. In this paper, we consider several linear two-dimensional consecutive k-type systems in the multi-state case for the first time, as generalizations of consecutive k-out-of-n systems and l-consecutive-k-out-of-n systems without/with overlapping. These systems include multi-state linear connected-(k, r)-out-of-(m, n): G systems, multi-state linear connected-(k, r)-or-(r, k)-out-of-(m, n): G systems, multi-state linear l-connected-(k, r)-out-of-(m, n): G systems without/with overlapping, and multi-state linear l-connected-(k, r)-or-(r, k)-out-of-(m, n): G systems without/with overlapping. We then derive their reliability functions by using the finite Markov chain imbedding approach (FMCIA) in a new way. We also present several examples to illustrate all the results developed here.
Title: Linear two-dimensional consecutive k-type systems in multi-state case | Reliability Engineering & System Safety, vol. 271, Article 112215
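As background for the FMCIA named in the abstract, here is a minimal sketch of the approach for the classical one-dimensional binary consecutive-k-out-of-n:G system with i.i.d. components of reliability p. This is a far simpler special case than the paper's two-dimensional multi-state constructions; the state space and function names are illustrative only.

```python
import numpy as np

def consecutive_k_out_of_n_G(n: int, k: int, p: float) -> float:
    """Reliability of a linear consecutive-k-out-of-n:G system
    (it works iff at least k consecutive components work) via
    finite Markov chain imbedding.

    State s in {0, ..., k-1} is the current run length of working
    components; state k is absorbing ("system already good")."""
    T = np.zeros((k + 1, k + 1))
    for s in range(k):
        T[s, s + 1] = p      # next component works: run grows by one
        T[s, 0] = 1 - p      # next component fails: run resets
    T[k, k] = 1.0            # absorbing success state
    pi = np.zeros(k + 1)
    pi[0] = 1.0              # start with an empty run
    for _ in range(n):       # imbed the chain over the n components
        pi = pi @ T
    return float(pi[k])
```

For n = 3, k = 2, p = 0.9 this returns 0.891, which matches the inclusion-exclusion calculation 0.81 + 0.81 - 0.729.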
Wildfires represent a significant natural hazard with the potential to disrupt the off-site power system (OPS) of nuclear power plants (NPPs), and their frequency and intensity are expected to increase due to climate change. The loss of the OPS resulting from wildfires can critically affect the safety and operational stability of NPPs, highlighting the need for comprehensive risk assessment. This study compares the results of wildfire risk assessments based on conventional, generalized input data derived from regional statistics with those based on detailed data that incorporate human activity characteristics to improve assessment precision. The proposed methodology is applied to the Kori NPP site, considering the wildfire occurrence frequency at the local administrative level, adjustments based on site accessibility, and regional statistics on wildfire duration. Findings from the Kori NPP case study indicate that incorporating these detailed factors can substantially change the estimated annual probability of loss of the OPS; in this case study, the estimate decreased by up to 95% relative to the baseline. Among all factors, regional variation in wildfire frequency was identified as the most influential parameter. This finding emphasizes the importance of spatially specific input data in enhancing the reliability of wildfire risk assessments. The proposed approach helps avoid both overestimation and underestimation of risk, offering practical insights for developing operational strategies and safety policies for NPPs through localized and accurate analysis.
Title: Wildfire risk assessment of nuclear power plant off-site power systems using human-activity–informed localized inputs: A case study of the Kori nuclear power plant
Authors: Choi Yeonwoo, Eem Seunghyun, Kwag Shinyoung, Park Jinhee
Pub Date: 2026-07-01 | DOI: 10.1016/j.ress.2026.112289 | Reliability Engineering & System Safety, vol. 271, Article 112289
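The frequency-adjustment logic described above can be illustrated with a deliberately simplified Poisson occurrence model. Every function name and parameter value below is hypothetical and not taken from the paper's Kori inputs; it only shows how localized scaling factors propagate into an annual loss probability.

```python
import math

def annual_loss_probability(base_freq, locality_factor, access_factor,
                            p_loss_given_fire):
    """Annual probability of at least one OPS-disabling wildfire under a
    Poisson occurrence model: a regional base frequency (events/year) is
    scaled by a local-frequency factor and a site-accessibility factor,
    then thinned by the conditional probability that a fire actually
    disables the OPS."""
    lam = base_freq * locality_factor * access_factor * p_loss_given_fire
    return 1.0 - math.exp(-lam)

# Hypothetical numbers: a generalized regional estimate vs. a localized one.
baseline = annual_loss_probability(0.2, 1.0, 1.0, 0.05)
localized = annual_loss_probability(0.2, 0.3, 0.5, 0.05)
```

With these illustrative inputs the localized estimate is much smaller than the baseline, mirroring the direction (though not the magnitude) of the 95% reduction reported in the case study.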
Pub Date: 2026-07-01 | Epub Date: 2025-11-25 | DOI: 10.1016/j.ress.2025.112019
Ning Pan , Manu Sasidharan , Sho Okazaki , Manuel Herrera , Ajith Kumar Parlikad
Effective prediction of infrastructure performance is essential for informed asset management. However, traditional approaches often treat different types of assets in isolation, overlooking critical interdependencies (such as those between track and drainage systems) that significantly influence asset degradation and risk. This paper proposes a hybrid model, BaGTA, that is temporally aware, spatially informed, and probabilistically grounded, to predict railway track performance while accounting for both uncertainty and inter-asset dependencies. The model was trained and validated on a dataset comprising 6072 track segments and 31,628 drainage assets across four UK railway routes. We demonstrate that incorporating track-drainage interdependencies improves prediction accuracy in both classification and regression tasks. Specifically, the inclusion of interdependencies reduced the prediction error for the Vertical Settlement Standard Deviation (VSD), a key indicator of track performance, by 24.65%. The proposed method not only captures complex spatiotemporal relationships but also quantifies uncertainty in predictions, offering a robust decision-support tool for infrastructure operators. This approach has the potential to transform maintenance strategies by enabling proactive, risk-informed, and cost-effective asset management.
Title: Railway track performance prediction considering track-drainage interdependencies | Reliability Engineering & System Safety, vol. 271, Article 112019
Pub Date: 2026-07-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.ress.2026.112271
Jiusi Zhang , Chunxiao Wang , Quan Qian , Shen Yin
As the complexity of industrial equipment continues to increase, determining the remaining useful life (RUL) with high precision holds substantial significance for maintaining intricate industrial systems. The development of cross-domain prognostic approaches without source domain data necessitates thorough investigation, given the inherent distribution shifts among edge devices’ degradation patterns and the imperative of preserving data security protocols. Furthermore, convolutional neural networks and long short-term memory networks perform poorly on data with complex structural dependencies. Consequently, this paper proposes a distributed RUL prediction approach based on a graph convolutional neural network. Specifically, this paper designs a differential attention graph convolutional neural network that can focus on key areas in degradation data. Furthermore, considering the privacy and security of degradation data, this paper designs a two-stage decision boundary adjustment approach to achieve source-free RUL prediction under cross-domain conditions. On this basis, the study introduces a federated consensus mechanism that implements progressive weight calibration aligned with distributed training dynamics in edge computing environments, which effectively reduces overfitting and improves generalization. Experimental validation on NASA’s publicly available aircraft engine degradation dataset confirms the operational efficacy of the proposed approach.
Title: Source-free domain adaptation for cross-domain remaining useful life prediction: A distributed federated learning perspective | Reliability Engineering & System Safety, vol. 271, Article 112271
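To make the "progressive weight calibration" idea concrete, here is a hedged sketch of a FedAvg-style aggregation whose client weights interpolate from data-size-proportional early in training to validation-performance-based later on. The schedule and all names are assumptions for illustration; the paper's exact consensus mechanism is not specified in the abstract.

```python
import numpy as np

def calibrated_aggregate(client_params, data_sizes, val_losses, round_frac):
    """FedAvg-style parameter aggregation with progressive weight
    calibration: weights interpolate from data-size-proportional
    (round_frac = 0, early training) to validation-performance-based
    (round_frac = 1, late training)."""
    size_w = np.asarray(data_sizes, dtype=float)
    size_w /= size_w.sum()
    perf_w = 1.0 / (np.asarray(val_losses, dtype=float) + 1e-8)  # lower loss -> larger weight
    perf_w /= perf_w.sum()
    w = (1.0 - round_frac) * size_w + round_frac * perf_w
    return sum(wi * p for wi, p in zip(w, client_params))
```

For two clients holding parameters 0.0 and 1.0 with data sizes 3:1 and equal validation losses, the aggregate moves from 0.25 at round_frac = 0 to 0.5 at round_frac = 1.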
Pub Date: 2026-07-01 | Epub Date: 2026-01-22 | DOI: 10.1016/j.ress.2026.112269
Quan Qian , Jianghong Zhou , Bingchang Hou , Jie Wang , Hanmin Sheng , Jiusi Zhang
Numerous remaining useful life transfer prediction methods have been proposed to handle the issues of domain shift and knowledge transfer. However, the effectiveness of almost all these methods relies on the assumption that the sample dimensions of the source and target domains are equal. In practice, owing to differences in operating speeds and fault types, such a consistency assumption inevitably creates degradation information asymmetry between the two domains, thereby resulting in distorted measurement of the intrinsic cross-domain data distribution. To bridge this gap, this study develops a new feature distribution adaptation method named the dimension-mismatched adversarial network (DMAN) to offer a new modeling paradigm. In DMAN, a dimension selection rule based on the Nyquist sampling theorem and frequency resolution is established, enabling the distribution alignment to concentrate on genuine data bias caused by variations in operating conditions. An adaptive empirical mutual information calculator is designed to accurately assess the similarity of data distribution for both domains. On this basis, an adversarial training mechanism is proposed to learn domain-invariant intrinsic degradation features and achieve domain confusion. Experimental results on the XJTU-SY and IEEE PHM 2012 Challenge datasets demonstrate the superiority of DMAN over several state-of-the-art approaches.
Title: Dimension-mismatched adversarial network: a new feature distribution adaptation method for rolling bearing RUL prediction | Reliability Engineering & System Safety, vol. 271, Article 112269
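One plausible reading of a "dimension selection rule based on the Nyquist sampling theorem and frequency resolution" is sketched below: pick a window length N large enough that the FFT bin width fs / N resolves the fault characteristic frequencies, after checking that the sampling rate satisfies Nyquist. This is an assumption for illustration, not the paper's exact formula.

```python
import math

def select_sample_dimension(fs: float, f_max_fault: float,
                            delta_f_target: float) -> int:
    """Choose a window length N so that (i) the sampling rate fs
    respects the Nyquist criterion for the highest fault characteristic
    frequency of interest, and (ii) the FFT frequency resolution fs / N
    is no coarser than delta_f_target."""
    if fs < 2.0 * f_max_fault:
        raise ValueError("fs violates the Nyquist criterion for f_max_fault")
    n_min = fs / delta_f_target               # fs / N <= delta_f  <=>  N >= fs / delta_f
    return 2 ** math.ceil(math.log2(n_min))   # round up to a power of two for the FFT
```

For example, at fs = 25.6 kHz with a 10 Hz target resolution, N >= 2560, rounded up to 4096.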
Pub Date: 2026-07-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.ress.2026.112212
Song Ding , Lunhu Hu , Xing Pan , Jiacheng Liu , Fu Guo
Over-trust in automated driving systems (ADS) can trigger severe accidents, whereas under-trust may reduce system acceptance and efficiency. Thus, assessing risk uncertainty is critical for ensuring driving safety and enhancing system performance. This study aims to develop a cognitive model–based framework for risk uncertainty assessment in human-ADS cooperative driving, enabling precise tracking of the evolving risks of over-trust and under-trust. We propose a drift-diffusion model (DDM)–based risk uncertainty assessment approach applicable across diverse driving task scenarios. A driving simulation experiment was conducted with three levels of ADS reliability and five levels of task difficulty, yielding 7200 behavioral observations for model fitting and validation. The hierarchical Bayesian DDM demonstrated strong predictive performance, with simulated distributions closely matching experimental data. Results reveal that higher ADS reliability significantly shortens trust decision time, while the impact of task difficulty is non-monotonic. More importantly, the model successfully quantifies the time-varying risk uncertainty of over-trust and under-trust. These findings highlight the proposed framework as an effective and interpretable tool for evaluating time-varying risk uncertainty in human-ADS cooperation, providing a crucial model foundation for the future development of real-time risk prediction and intervention systems.
Title: Probabilistic risk uncertainty assessment for driver over-trust and under-trust in Level 3 human-automated driving systems cooperative driving based on the drift-diffusion model | Reliability Engineering & System Safety, vol. 271, Article 112212
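For readers unfamiliar with the DDM underlying the framework above, here is a generic Euler-Maruyama simulation of drift-diffusion trials: evidence accumulates from a starting point toward an upper ("trust") or lower ("distrust") boundary. This is the textbook model, not the paper's fitted hierarchical Bayesian version, and the boundary labels are assumptions.

```python
import numpy as np

def simulate_ddm(v, a, z, dt=0.001, sigma=1.0, t_max=5.0,
                 n_trials=400, seed=0):
    """Euler-Maruyama simulation of a basic drift-diffusion model:
    evidence x starts at z * a, drifts at rate v with Gaussian noise of
    scale sigma, and a decision fires at the upper boundary a ("trust")
    or the lower boundary 0 ("distrust"). Trials that time out at t_max
    are counted as lower-boundary responses here for simplicity."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a and t < t_max:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if x >= a else 0)
        rts.append(t)
    return np.array(choices), np.array(rts)
```

With a strong positive drift (v = 2, a = 1, unbiased start z = 0.5), the analytic upper-boundary hit probability is about 0.88, so most simulated trials end in the "trust" decision.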
Pub Date: 2026-07-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.ress.2025.112168
Ningjie Li , Xinli Hu , Jinsong Huang , Michael Beer , Hongchao Zheng
Adaptive metamodels using single learning functions fail to consistently maintain high accuracy or efficiency in calculating failure probabilities of slopes under all scenarios (e.g., natural vs. reservoir slopes), a consequence of the No Free Lunch theorem. This poses a challenge for the adaptive selection of the optimal function over high-dimensional random field domains. To this end, we treat the selection problem as a multi-armed bandit problem and propose a portfolio optimization-based adaptive polynomial-chaos Kriging (POPCK) method that dynamically balances exploration and exploitation across six distinct learning functions, thereby adaptively selecting better functions based on their historical performance. This adaptive selection can be performed under high-dimensional variables by incorporating Karhunen–Loève expansion and sliced inverse regression techniques into POPCK. The feasibility of the proposed method is demonstrated through four classic examples (involving natural slopes, four-layer soils, rainfall infiltration, and water-level drawdown conditions). Results show that the proposed method exhibits good robustness for all examples, high accuracy (ranking 1st), and computational efficiency (ranking 2nd), whereas the performance of PCKs using single learning functions fluctuates greatly. This method effectively mitigates the randomness of learning function selection, which is valuable for engineers who lack prior knowledge of optimal learning functions.
Title: Adaptive portfolio optimization-based metamodel method for the multi-armed bandit problem of learning function in slope reliability analysis | Reliability Engineering & System Safety, vol. 271, Article 112168
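The multi-armed bandit framing above can be illustrated with a standard UCB1 selector over candidate learning functions, where the reward for an arm might be the surrogate-accuracy improvement its chosen sample produced. Note this is only illustrative of the framing: the paper's POPCK uses a portfolio-optimization update rather than UCB1, and the arm names and reward definition here are assumptions.

```python
import math

class LearningFunctionBandit:
    """UCB1 selection among candidate learning functions (arm names are
    placeholders, e.g. 'U', 'EFF'). Arms whose picked samples yield
    higher rewards are played more often over time."""
    def __init__(self, names):
        self.names = list(names)
        self.counts = {n: 0 for n in self.names}
        self.values = {n: 0.0 for n in self.names}  # running mean reward
        self.t = 0

    def select(self):
        self.t += 1
        for n in self.names:                 # play each arm once first
            if self.counts[n] == 0:
                return n
        return max(self.names, key=lambda n: self.values[n]
                   + math.sqrt(2.0 * math.log(self.t) / self.counts[n]))

    def update(self, name, reward):
        self.counts[name] += 1
        c = self.counts[name]
        self.values[name] += (reward - self.values[name]) / c
```

In a toy run where arm "A" always pays 1.0 and arm "B" pays 0.1, the selector concentrates its plays on "A" while still occasionally revisiting "B".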
Pub Date: 2026-07-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.ress.2026.112233
Hongqiang Li , Xiangkun Meng , Wenjun Zhang , Xiang-Yu Zhou , Xue Yang
With the rapid development of maritime autonomous surface ships (MASS), the reliability of onboard mechanical equipment has become increasingly critical for safe and efficient operation. Motivated by these emerging requirements, this study proposes a real-time reliability assessment framework for marine mechanical equipment that integrates data-driven models with physical knowledge. A synthetic dataset is constructed by combining physical knowledge with a Wasserstein generative adversarial network (WGAN); the health indicator (HI) is then predicted using a principal component analysis–long short-term memory (PCA-LSTM) model, and the predictions are smoothed using Savitzky-Golay filtering. Finally, real-time reliability quantification is achieved based on the Weibull distribution and maximum likelihood estimation. The case study of a ship propulsion system demonstrates that this method can identify the accelerating trend of reliability reduction after approximately 400 h of operation, while reliability remains at 99.36% until 720 h. The capability of the machine learning models to predict real-time reliability, combined with physical knowledge, reflects real-world conditions. The results provide real-time predictions of the health state and reliability of mechanical equipment, enabling early fault detection and informing maintenance planning, thereby supporting the reliable operation of MASS.
Title: A real-time reliability assessment framework for marine mechanical equipment integrating machine learning and physical knowledge: Toward applications in maritime autonomous surface ships | Reliability Engineering & System Safety, vol. 271, Article 112233
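The final step of the framework, Weibull reliability quantification via maximum likelihood, can be sketched generically as follows. The paper derives failure times from HI predictions; here we simply fit complete failure-time data, solving the profile likelihood equation for the shape by bisection (the profile derivative is monotone increasing in beta), then recovering the scale in closed form. Function names and tolerances are illustrative.

```python
import numpy as np

def weibull_mle(times, lo=0.01, hi=20.0, iters=100):
    """Two-parameter Weibull MLE for complete failure-time data.
    Solves g(beta) = 0, where g is the derivative of the profile
    log-likelihood in the shape beta, then computes the scale eta."""
    t = np.asarray(times, dtype=float)
    logs = np.log(t)

    def g(beta):
        tb = t ** beta
        return (tb * logs).sum() / tb.sum() - 1.0 / beta - logs.mean()

    for _ in range(iters):          # g is increasing: plain bisection
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = ((t ** beta).mean()) ** (1.0 / beta)
    return beta, eta

def reliability(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t / eta)^beta)."""
    return np.exp(-(t / eta) ** beta)
```

Fitting 5000 samples drawn from Weibull(shape 2, scale 100) recovers the parameters closely, and R(eta) equals exp(-1) by construction.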
Pub Date: 2026-07-01 | Epub Date: 2026-01-12 | DOI: 10.1016/j.ress.2026.112220
Fatao Zhang , Chi Zhang , Yanxia Chang
Post-disaster maintenance with drone inspection is crucial for enhancing the resilience of critical infrastructures. In this paper, we propose a novel stochastic dynamic programming model that integrates maintenance team scheduling with drone-based inspections via a repair-vehicle take-off and landing platform (RVTLP) approach, so that drones can follow maintenance vehicles deep into disaster areas and dynamically update damage information. Our model explicitly considers travel time between infrastructure components and scenarios with multiple repair teams, aiming to maximize infrastructure resilience within a limited planning horizon. To deal with the computational complexity of our optimization model, we develop a customized approximate dynamic programming algorithm with unvisited-state approximation and limited-period storage, and validate the algorithm's ability to solve large-scale problems. Finally, computational experiments under real-world scenarios reveal that drone inspection range, travel time, and the number of maintenance teams exert significant effects on the resilience of critical infrastructures, providing important insights into how resilience evolves with these parameters.
{"title":"A customized approximate dynamic programming approach for the restoration optimization of disrupted infrastructures with drone inspection","authors":"Fatao Zhang , Chi Zhang , Yanxia Chang","doi":"10.1016/j.ress.2026.112220","DOIUrl":"10.1016/j.ress.2026.112220","url":null,"abstract":"<div><div>Post-disaster maintenance with drone inspection is crucial for enhancing the resilience of critical infrastructures. In this paper, we propose a novel stochastic dynamic programming model that integrates maintenance team scheduling with drone-based inspections by using repair vehicles as take-off and landing platforms (RVTLP) approach, so that drones can follow maintenance vehicles deep into disaster areas and dynamically update damage information. Our model explicitly considers travel time between infrastructure components and scenarios with multiple repair teams, aiming to maximize infrastructure resilience within a limited planning horizon. To deal with the computational complexity of our optimization model, we developed a customized approximate dynamic programming algorithm with unvisited-state approximation and limited-period storage and validated the algorithm's ability to solve large-scale problems. 
Finally, computational experiments under real-world scenarios reveal that drone inspection range, travel time, and the number of maintenance teams exert significant effects on the resilience of critical infrastructures, providing important insights into how the resilience evolves with these parameters.</div></div>","PeriodicalId":54500,"journal":{"name":"Reliability Engineering & System Safety","volume":"271 ","pages":"Article 112220"},"PeriodicalIF":11.0,"publicationDate":"2026-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
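The dynamic-programming backbone of such a restoration-scheduling model can be illustrated on a toy instance. This is a sketch only: one repair team, deterministic repair durations, and no drone inspection or stochastic damage updates; the names `dur`, `gain`, and `HORIZON` and their values are invented for illustration, and an exact DP over subsets of unrepaired components stands in for the paper's customized approximate dynamic programming. Resilience is measured here as the per-period gain each restored component accrues over the remainder of the planning horizon.

```python
from functools import lru_cache

# Toy restoration-scheduling instance (hypothetical data): one repair team,
# component i takes dur[i] periods to repair and, once restored, contributes
# gain[i] resilience in every remaining period of the planning horizon.
dur = (2, 1, 3)
gain = (5.0, 3.0, 8.0)
HORIZON = 6

@lru_cache(maxsize=None)
def best_value(remaining=frozenset(range(3)), t=0):
    """Exact DP: maximize total resilience accumulated within the horizon.

    State = (set of still-damaged components, current time); at each step we
    either repair one more component (if it finishes within the horizon) or stop.
    """
    best = 0.0  # stopping is always feasible and yields no further resilience
    for i in remaining:
        finish = t + dur[i]
        if finish > HORIZON:
            continue  # repair would not finish in time; skip this component
        reward = gain[i] * (HORIZON - finish)  # resilience accrued after repair
        best = max(best, reward + best_value(remaining - {i}, finish))
    return best
```

On this instance the optimal schedule repairs the quick low-gain component first, then the slow high-gain one; the real problem additionally faces uncertain damage states revealed by drones, which is what motivates the approximate DP with unvisited-state approximation.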
Pub Date: 2026-07-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.ress.2026.112281
Qingwen Xiong , Xianbao Yuan , Sen Zhang , Jianjun Zhou , Zhangliang Mao , Yonghong Zhang
Model calibration is a technique that improves computational accuracy by adjusting model inputs or structures; calibration methods can be categorized as probabilistic or non-probabilistic. In the field of nuclear reactors, limitations such as insufficient data, complex model structures, and large numbers of parameters often render probabilistic methods inapplicable. Meanwhile, non-probabilistic methods fail to account for model-form uncertainty, making it difficult to accurately evaluate confidence levels and coverage. To address these challenges, a novel uncertainty-informed calibration framework based on non-probabilistic interval theory is proposed. The framework integrates artificial neural networks, model uncertainty evaluation, double-loop nested sampling, and optimization algorithms, enabling the acquisition of non-probabilistic intervals for input parameters through inverse calibration. The proposed framework is validated on the critical flow model, and its reliability is verified by comparing the performance of multiple calibration methods. The framework is then applied to the counter-current flow limitation model. The results demonstrate that it is suitable for inverse calibration even with limited observational data, accurately obtaining input parameter intervals with a specified coverage rate (e.g., 95 %) while maintaining high computational efficiency.
{"title":"Uncertainty informed calibration of thermal-hydraulic models for nuclear reactor via integrated neural network and optimization algorithm framework","authors":"Qingwen Xiong , Xianbao Yuan , Sen Zhang , Jianjun Zhou , Zhangliang Mao , Yonghong Zhang","doi":"10.1016/j.ress.2026.112281","DOIUrl":"10.1016/j.ress.2026.112281","url":null,"abstract":"<div><div>Model calibration is a technique that enhances computational accuracy by adjusting model inputs or structures, and can be categorized into probabilistic and non-probabilistic methods. In the field of nuclear reactors, limitations such as insufficient data, complex model structures, and numerous parameters often render probabilistic methods inapplicable in many scenarios. Meanwhile, non-probabilistic methods fail to account for model form uncertainty, making it difficult to accurately evaluate the confidence level and coverage. To address these challenges, a novel uncertainty informed calibration framework based on the non-probabilistic interval theory is proposed. The framework integrates techniques such as artificial neural networks, model uncertainty evaluation, double-loop nested sampling, and optimization algorithms, enabling the acquisition of non-probabilistic intervals for input parameters through inverse calibration. The proposed framework is validated using the critical flow model, and its reliability is verified by comparing the performance of multiple calibration methods. Subsequently, the framework is applied to the counter-current flow limitation model. 
The results demonstrate that the framework is suitable for inverse calibration even with limited observational data, as it accurately obtains input parameter intervals with a specific coverage rate (e.g., 95 %) while maintaining high computational efficiency.</div></div>","PeriodicalId":54500,"journal":{"name":"Reliability Engineering & System Safety","volume":"271 ","pages":"Article 112281"},"PeriodicalIF":11.0,"publicationDate":"2026-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146079480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
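The core idea of interval-based inverse calibration can be sketched on a synthetic example. Assumptions to note: a linear toy model y = θ·x stands in for the thermal-hydraulic code and its neural-network surrogate, a plain grid search replaces the paper's double-loop nested sampling and optimization algorithms, and all "observations" are synthetic. The goal is the narrowest parameter interval [lo, hi] whose induced output band covers at least 95 % of the observations:

```python
import numpy as np

# Synthetic "experiments" for a toy model y = theta * x with theta treated as
# a non-probabilistic interval (all data invented for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, 200)
y_obs = 1.5 * x + rng.normal(0.0, 0.1, 200)

def coverage(lo, hi):
    """Fraction of observations inside the output band induced by [lo, hi].

    The toy model is monotone in theta, so the band endpoints are lo*x and hi*x.
    """
    return float(np.mean((y_obs >= lo * x) & (y_obs <= hi * x)))

# Inverse calibration by exhaustive search: narrowest interval with >= 95 % coverage.
candidates = np.linspace(1.0, 2.0, 101)
best = None
for lo in candidates:
    for hi in candidates[candidates > lo]:
        if coverage(lo, hi) >= 0.95:
            # Coverage is nondecreasing in hi, so the first covering hi is the
            # narrowest for this lo; no need to widen further.
            if best is None or hi - lo < best[1] - best[0]:
                best = (lo, hi)
            break
```

The resulting `best` interval plays the role of the calibrated non-probabilistic input interval; the paper's framework reaches an analogous target for real thermal-hydraulic models, where the forward model is expensive and is therefore replaced by an ANN surrogate inside the double loop.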