In reliability engineering, it is common for multiple performance characteristics (PCs) to deteriorate simultaneously. The associated degradation processes are usually dependent and exhibit some unit-to-unit heterogeneity, which makes multivariate degradation modeling and reliability evaluation more challenging. To this end, we propose a new multivariate gamma process model. This model introduces a multivariate random vector, whose joint distribution is constructed from marginal gamma distributions and a copula function, to describe the unit-to-unit variability and the dependence among PCs. Unlike traditional copula-based degradation models, it does not require all PCs to be inspected at the same time points. In addition, two reliability evaluation methods are developed. Model parameters are estimated by the stochastic expectation maximization algorithm, and a three-step procedure is provided to initialize this algorithm. Numerical simulations are then carried out to verify the proposed methods. Finally, two examples are provided for illustration, showing that the proposed model and methods scale well to degradation data with different numbers of PCs. Moreover, comparisons with several benchmark models demonstrate the superiority of the proposed model.
{"title":"A new multivariate gamma process model for degradation analysis","authors":"Kai Song","doi":"10.1002/qre.3646","DOIUrl":"https://doi.org/10.1002/qre.3646","url":null,"abstract":"In reliability engineering, it is frequently encountered that multiple performance characteristics (PCs) deteriorate simultaneously. The associated degradation processes are usually dependent and exhibit some heterogeneity from unit to unit, which makes the multivariate degradation modeling and reliability evaluation more challenging. To this end, we propose a new multivariate gamma process model. This model introduces a multivariate random vector, whose joint distribution is constructed by marginal gamma distributions and a copula function, to describe the unit‐to‐unit variability and the dependence among PCs. Meanwhile, it does not require all PCs to be inspected at the same time points in contrast to the traditional copula‐based degradation models. In addition, two reliability evaluation methods are developed. Model parameters are estimated by the stochastic expectation maximization algorithm, and a three‐step procedure is provided to initialize this algorithm. Subsequently, numerical simulations are implemented to verify the proposed methods. Finally, two examples are provided for illustration, and it is shown that the proposed model and methods scale well to the degradation data with different numbers of PCs. What is more, comparisons with several benchmark models are performed, and the superiority of the proposed model is well demonstrated.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate prediction of an engine's remaining useful life (RUL) is essential to ensure the safe operation of the aircraft. However, traditional deep-learning-based methods for RUL prediction have been limited in interpretability and hyperparameter tuning in practical applications, owing to the intricate latent relations that arise during the degradation process. To address these dilemmas, an improved multi-strategy tuna swarm optimization-assisted graph convolutional neural network (IMTSO-GCN) is developed for RUL prediction in this work. Specifically, mutual information is used to describe potential relationships among the measured parameters so that these relationships can be used to build edges between the parameters. Moreover, not all relational nodes positively affect RUL prediction, and the inherent hyperparameters of the GCN are high-dimensional. Inspired by the "No Free Lunch" (NFL) theorem, IMTSO is therefore proposed to reduce the cost of hyperparameter optimization: cycle chaotic mapping is employed to initialize the population and improve the uniformity of the initial population distribution, and a novel adaptive approach is proposed to enhance the exploration and exploitation of tuna swarm optimization (TSO). The CMAPSS dataset was used to validate the effectiveness and advantages of IMTSO-GCN, and the experimental results show that the R2 of IMTSO-GCN is greater than 0.99, the RMSE is less than 3, and the Score error is within 1.
{"title":"Combined improved tuna swarm optimization with graph convolutional neural network for remaining useful life of engine","authors":"Yongliang Yuan, Qingkang Yang, Guohu Wang, Jianji Ren, Zhenxi Wang, Feng Qiu, Kunpeng Li, Haiqing Liu","doi":"10.1002/qre.3651","DOIUrl":"https://doi.org/10.1002/qre.3651","url":null,"abstract":"Accurate prediction of the engine's remaining useful life (RUL) is essential to ensure the safe operation of the aircraft because. However, traditional deep‐learning based methods for RUL prediction has been limited by interpretability and adjustment for hyperparameters in practical applications due to the intricate potential relations during the degradation process. To address these dilemmas, an improved multi‐strategy tuna swarm optimization‐assisted graph convolutional neural network (IMTSO‐GCN) is developed to achieve RUL prediction in this work. Specifically, mutual information is used to describe potential relationships among measured parameters so that they could be utilized in building edges for these parameters. Besides, considering that not all relational nodes will positively affect the RUL prediction and the inherent hyperparameters of the GCN are high‐dimensional. Inspired by “No Free Lunch (NFL)”, IMTSO is proposed to reduce the optimization cost of hyperparameters, in which cycle chaotic mapping is employed to achieve initialization of the population for improving the uniformity of the initial population distribution. Besides, a novel adaptive approach is proposed to enhance the exploration and exploitation of tuna swarm optimization (TSO). The CMAPSS dataset was used to validate the effectiveness and advancedness of IMTSO‐GCN, and the experimental results show that the <jats:italic>R<jats:sup>2</jats:sup></jats:italic> of the IMTSO‐GCN is greater than 0.99, RMSE is less than 3, the Score error is within 1.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust control charts are gaining importance in statistical process control because they are insensitive to departures from normality. The main objective of the current work is therefore to determine the effect of a non-normal process on the Exponentially Weighted Moving Average (EWMA) control chart. To this end, sensitivity and reliability comparisons are made under a non-normal process by comparing five robust M-scale estimators suggested in the literature, which are used to modify the EWMA control limits for monitoring the process mean on the basis of the percentile bootstrap estimator. The paper examines the run length (RL) distribution of the robust EWMA control charts under a non-normal process, with the exponential distribution taken as the non-normal process. The standard deviation of the RL, the out-of-control average run length (ARL), and the shift detection probabilities are examined to assess the sensitivity and reliability of the robust EWMA control charts for monitoring the process mean. The results indicate that the performance of the classical EWMA control chart is substantially affected by the non-normal distribution, and the proposed EWMA control charts show higher sensitivity than the classical chart in terms of smaller out-of-control ARLs. A real-life example from the medical sciences demonstrates the practical use of the proposed control charts. The simulation analysis and the practical example show that the suggested control charts detect an out-of-control process quickly.
{"title":"Sensitivity and reliability comparisons of EWMA mean control chart based on robust scale estimators under non‐normal process: COVID data application","authors":"Nadia Saeed, Ala'a Mahmoud Falih Bataineh, Moustafa Omar Ahmed Abu‐Shawiesh, Firas Haddad","doi":"10.1002/qre.3649","DOIUrl":"https://doi.org/10.1002/qre.3649","url":null,"abstract":"Robust control charts are getting vital importance in statistical process control theory as they are insensitive to the departure from normality. Therefore, the main objective of current work is to determine the effects of non‐normal process on the Exponentially Weighted Moving Average (EWMA) control chart. To achieve this goal, the sensitivity and reliability comparisons are made under the non‐normal process by comparing five robust M‐scale estimators, suggested in literature to modify the EWMA control limits for monitoring process mean and on the basis of percentile bootstrap estimator. The paper addresses the run length (RL) distribution of a robust EWMA control chart under the non‐normal process for which the exponential distribution is used as non‐normal process. The standard deviation of RL, out‐of‐control average run length (ARL), and shift detection probabilities are examined to assess the sensitivity and reliability of robust EWMA control charts for mean of monitoring process. The results of this research indicate that the classical EWMA control chart's performance is substantially impacted by the non‐normal distribution and the proposed EWMA control charts show higher sensitivity than classical one in terms of having smaller values of out‐of‐control ARLs. A real‐life example from the medical sciences field is provided the practical usage of the proposed control charts. The simulation analysis and practical example have shown that the suggested control charts are effective in quickly monitoring the out‐of‐control process.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Florian Lamalle, Vincent Feuillard, Anne Sabourin, Stephan Clémençon
The purpose of this paper is to propose a novel clustering technique tailored to randomly censored data in reliability/survival analysis. It is based on an underlying mixture model of Weibull distributions, whose parameters are estimated by means of a variant of the Expectation-Maximization method that accounts for random censorship. Beyond describing the algorithm, we address model selection issues and investigate the method's performance from an empirical perspective by applying it to possibly strongly censored (synthetic and real) survival data. The experiments carried out provide strong empirical evidence that our algorithm performs better than alternative methods that stand as natural competitors in this framework.
{"title":"Weibull mixture estimation based on censored data with applications to clustering in reliability engineering","authors":"Florian Lamalle, Vincent Feuillard, Anne Sabourin, Stephan Clémençon","doi":"10.1002/qre.3647","DOIUrl":"https://doi.org/10.1002/qre.3647","url":null,"abstract":"It is the purpose of this paper to propose a novel clustering technique tailored to randomly censored data in reliability/survival analysis. It is based on an underlying mixture model of Weibull distributions and consists in estimating its parameters by means of a variant of the <jats:italic>Expectation–Maximization</jats:italic> method in the presence of random censorship. Beyond the description of the algorithm, model selection issues are addressed and we investigate its performance from an empirical perspective by applying it to possibly strongly censored (synthetic and real) survival data. The experiments carried out provide strong empirical evidence that our algorithm performs better than alternative methods standing as natural competitors in this framework.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
David Obst, Sandra Claudel, Jairo Cugliari, Badih Ghattas, Yannig Goude, Georges Oppenheim
Traditional mid-term electricity forecasting models rely on calendar and meteorological information, such as temperature and wind speed, to achieve high performance. However, depending on such variables has drawbacks, as they may not be informative enough during extreme weather. Although ubiquitous, textual sources of information are rarely included in prediction algorithms for time series, despite the relevant information they may contain. In this work, we propose to leverage openly accessible weather reports for electricity demand and meteorological time series prediction problems. Our experiments on French and British load data show that the considered textual sources improve the overall accuracy of the reference model, particularly during extreme weather events such as storms or abnormal temperatures. Additionally, we apply our approach to the imputation of missing values in meteorological time series and show that our text-based approach beats standard methods. Furthermore, the influence of words on the time series' predictions can be interpreted for the considered text encoding schemes, leading to greater confidence in our results.
{"title":"Textual data for electricity load forecasting","authors":"David Obst, Sandra Claudel, Jairo Cugliari, Badih Ghattas, Yannig Goude, Georges Oppenheim","doi":"10.1002/qre.3637","DOIUrl":"https://doi.org/10.1002/qre.3637","url":null,"abstract":"Traditional mid‐term electricity forecasting models rely on calendar and meteorological information such as temperature and wind speed to achieve high performance. However depending on such variables has drawbacks, as they may not be informative enough during extreme weather. While ubiquitous, textual sources of information are hardly included in prediction algorithms for time series, despite the relevant information they may contain. In this work, we propose to leverage openly accessible weather reports for electricity demand and meteorological time series prediction problems. Our experiments on French and British load data show that the considered textual sources allow to improve overall accuracy of the reference model, particularly during extreme weather events such as storms or abnormal temperatures. Additionally, we apply our approach to the problem of imputation of missing values in meteorological time series, and we show that our text‐based approach beats standard methods. Furthermore, the influence of words on the time series' predictions can be interpreted for the considered encoding schemes of the text, leading to a greater confidence in our results.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article we consider the meta-analysis of stage life testing experiments. We propose a method to combine data obtained from a number of independent stage life testing experiments. We assume that each stage life testing experiment has only two stress levels and that the lifetime of the experimental units follows a Weibull distribution at each stress level. The distributions under the two stress levels are connected through the Khamis-Higgins model assumption. We assume that the shape parameters of the Weibull distribution are the same for all samples, whereas the scale parameters differ. We provide the maximum likelihood estimates and asymptotic confidence intervals of the model parameters, as well as Bayesian inference for them. Because the Bayes estimates do not exist in explicit form, they and the associated credible intervals are obtained using the Gibbs sampling technique. We perform an extensive simulation study to assess the performance of the different estimators and analyze two data sets for illustrative purposes. The results are quite satisfactory.
{"title":"Inference of multi‐sample stage life testing model under Weibull distribution","authors":"Debashis Samanta, Debasis Kundu","doi":"10.1002/qre.3642","DOIUrl":"https://doi.org/10.1002/qre.3642","url":null,"abstract":"In this article we consider the meta‐analysis of stage life testing experiments. We propose a method to combine the data obtained from number of independent stage life testing experiments. We have assumed that there are only two stress levels for each stage life testing experiment and lifetime of the experimental units follows Weibull distribution at each stress level. The distributions under two stress levels are connected through Khamis–Higgings model assumption. We assume that the shape parameters of Weibull distribution are same for all the samples; however, the scale parameters are different. We provide the maximum likelihood estimation and the asymptotic confidence intervals of the model parameters. We also provide the Bayesian inference of the model parameters. The Bayes estimates and the associated credible intervals are obtained using Gibbs sampling technique since the explicit forms of the Bayes estimates do not exist. We have performed an extensive simulation study to see the performances of the different estimators, and the analyses of two data sets for illustrative purpose. The results are quite satisfactory.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The objective of this paper is to identify the most sensitive component of a turbogenerator and optimize its availability. To achieve this, we begin by conducting an initial reliability, availability, maintainability, and dependability (RAMD) analysis of each component. Subsequently, a novel stochastic model is developed to analyze the steady-state availability of the turbogenerator, employing a Markov birth-death process. In this model, failure and repair rates are assumed to follow exponential distributions and to be statistically independent. To optimize the proposed stochastic model, we employ four population-based meta-heuristic approaches: grey wolf optimization (GWO), the dragonfly algorithm (DA), the grasshopper optimization algorithm (GOA), and the whale optimization algorithm (WOA). These algorithms are used to find the optimal solution by iteratively improving the availability of the turbogenerator. The performance of each algorithm is evaluated in terms of system availability and execution time, allowing us to identify the most efficient algorithm for this task. Based on the numerical results, it is evident that the WOA outperforms the GWO, GOA, and DA in terms of both system availability and execution time.
{"title":"Stochastic modeling and optimization of turbogenerator performance using meta‐heuristic techniques","authors":"Deepak Sinwar, Naveen Kumar, Ashish Kumar, Monika Saini","doi":"10.1002/qre.3639","DOIUrl":"https://doi.org/10.1002/qre.3639","url":null,"abstract":"The objective of this paper is to identify the most sensitive component of a turbogenerator and optimize its availability. To achieve this, we begin by conducting an initial reliability, availability, maintainability, and dependability (RAMD) analysis on each component. Subsequently, a novel stochastic model is developed to analyze the steady‐state availability of the turbogenerator, employing a Markov birth‐death process. In this model, failure and repair rates are assumed to follow an exponential distribution and are statistically independent. To optimize the proposed stochastic model, we employ four population‐based meta‐heuristic approaches: the grey wolf optimization (GWO), the dragonfly algorithm (DA), the grasshopper optimization algorithm (GOA), and the whale optimization algorithm (WOA). These algorithms are utilized to find the optimal solution by iteratively improving the availability of the turbogenerator. The performance of each algorithm is evaluated in terms of system availability and execution time, allowing us to identify the most efficient algorithm for this task. Based on the numerical results, it is evident that the WOA outperforms the GWO, GOA, and DA in terms of both system availability and execution time.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dániel Boros, Bálint Csanády, Iván Ivkovic, Lóránt Nagy, András Lukács, László Márkus
This research explores the reliability of deep learning, specifically Long Short-Term Memory (LSTM) networks, for estimating the Hurst parameter in fractional stochastic processes. The study focuses on three types of processes: fractional Brownian motion (fBm), the fractional Ornstein-Uhlenbeck (fOU) process, and linear fractional stable motion (lfsm). The work involves the fast generation of extensive fBm and fOU datasets so that the LSTM network can be trained on a large volume of data in a feasible time. The study analyses the accuracy of the LSTM network's Hurst parameter estimates in terms of performance measures such as root mean squared error (RMSE), mean absolute error (MAE), mean relative error (MRE), and quantiles of the absolute and relative errors. It finds that the LSTM outperforms traditional statistical methods for fBm and fOU processes; however, it has limited accuracy on lfsm processes. The research also examines how training length and evaluation sequence length affect the LSTM's performance. The methodology is applied to estimating the Hurst parameter in Li-ion battery degradation data and obtaining confidence bounds for the estimate. The study concludes that while deep learning methods show promise in parameter estimation of fractional processes, their effectiveness is contingent on the process type and the quality of training data.
{"title":"Deep learning the Hurst parameter of linear fractional processes and assessing its reliability","authors":"Dániel Boros, Bálint Csanády, Iván Ivkovic, Lóránt Nagy, András Lukács, László Márkus","doi":"10.1002/qre.3641","DOIUrl":"https://doi.org/10.1002/qre.3641","url":null,"abstract":"This research explores the reliability of deep learning, specifically Long Short‐Term Memory (LSTM) networks, for estimating the Hurst parameter in fractional stochastic processes. The study focuses on three types of processes: fractional Brownian motion (fBm), fractional Ornstein–Uhlenbeck (fOU) process, and linear fractional stable motions (lfsm). The work involves a fast generation of extensive datasets for fBm and fOU to train the LSTM network on a large volume of data in a feasible time. The study analyses the accuracy of the LSTM network's Hurst parameter estimation regarding various performance measures like root mean squared error (RMSE), mean absolute error (MAE), mean relative error (MRE), and quantiles of the absolute and relative errors. It finds that LSTM outperforms the traditional statistical methods in the case of fBm and fOU processes; however, it has limited accuracy on lfsm processes. The research also delves into the implications of training length and valuation sequence length on the LSTM's performance. The methodology is applied to estimating the Hurst parameter in li‐ion battery degradation data and obtaining confidence bounds for the estimation. The study concludes that while deep learning methods show promise in parameter estimation of fractional processes, their effectiveness is contingent on the process type and the quality of training data.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noé Fellmann, M. Pasquier, C. Blanchet‐Scalliet, C. Helbert, A. Spagnol, D. Sinoquet
We are motivated by the field of air quality control, where one goal is to quantify the impact of uncertain inputs, such as meteorological conditions and traffic parameters, on pollutant dispersion maps. Sensitivity analysis is one answer, but most sensitivity analysis methods are designed for scalar or vector outputs and are poorly suited to an output space of maps. To address this problem, we propose a generic approach to sensitivity analysis of set-valued models, which can be applied to the case of maps. We propose and study three different types of sensitivity indices. The first are inspired by Sobol' indices but adapted to sets using the theory of random sets. The second adapt universal indices defined for a general metric output space. The last use kernel-based sensitivity indices adapted to sets. The proposed methods are implemented and tested in an uncertainty analysis of a toy excursion-set problem and of time-averaged concentration maps of pollutants in an urban environment.
{"title":"Sensitivity analysis for sets: Application to pollutant concentration maps","authors":"Noé Fellmann, M. Pasquier, C. Blanchet‐Scalliet, C. Helbert, A. Spagnol, D. Sinoquet","doi":"10.1002/qre.3638","DOIUrl":"https://doi.org/10.1002/qre.3638","url":null,"abstract":"We are motivated by the field of air quality control, where one goal is to quantify the impact of uncertain inputs such as meteorological conditions and traffic parameters on pollutant dispersion maps. Sensitivity analysis is one answer, but the majority of sensitivity analysis methods are designed to deal with scalar or vector outputs and are badly suited to an output space of maps. To address this problem, we propose a generic approach to sensitivity analysis of set‐valued models. This approach can be applied to the case of maps. We propose and study three different types of sensitivity indices. The first ones are inspired by Sobol' indices but adapted to sets based on the theory of random sets. The second ones adapt <jats:italic>universal</jats:italic> indices defined for a general metric output space. The last set of indices uses kernel‐based sensitivity indices adapted to sets. The proposed methods are implemented and tested to perform an uncertainty analysis for a toy excursion set problem and for time‐averaged concentration maps of pollutants in an urban environment.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taotao Cheng, Diqing Fan, Xintian Liu, JinGang Wang
Accurately analyzing the reliability of drive shaft systems is crucial in engineering vehicles and mechanical equipment. A complex-system reliability modeling and analysis method based on a dynamic Bayesian network (DBN) is proposed to support accurate and timely repair and to reduce its cost. Considering the logical structure of the drive shaft system, the reliability block diagram (RBD) of the manufacturing system is constructed in a hierarchical and graded manner, and the Bayesian network (BN) is obtained directly from the RBD based on the conversion relationships among the RBD, the fault tree, and the BN. A variable-structure DBN model of the system is then constructed by extending the static BN in time series and incorporating dynamic reliability parameters of the components. Reliability analyses based on DBN reasoning, including reliability assessment, significance metrics, and sensitivity analyses, are performed to identify critical subsystems and critical components. This research contributes to enhancing product reliability and equipment utilization and to improving economic efficiency.
{"title":"Reliability analysis for manufacturing system of drive shaft based on dynamic Bayesian network","authors":"Taotao Cheng, Diqing Fan, Xintian Liu, JinGang Wang","doi":"10.1002/qre.3644","DOIUrl":"https://doi.org/10.1002/qre.3644","url":null,"abstract":"Accurately analyzing the reliability of driveshaft systems is crucial in engineering vehicles and mechanical equipment. A complex system reliability modeling and analysis method based on a dynamic Bayesian network (DBN) is proposed to repair accurately and reduce the cost in time. Considering the logical structure of the drive shaft system, the reliability block diagram (RBD) of the manufacturing system is constructed in a hierarchical and graded manner, and a method of obtaining the Bayesian network (BN) directly from the RBD is adopted based on the conversion relationship between the RBD, fault tree and BN. A variable‐structure DBN model of the system is constructed based on a static BN extended in time series and incorporating dynamic reliability parameters of the components. Reliability analyses based on DBN reasoning, including reliability assessment, significance metrics, and sensitivity analyses, were performed to identify critical subsystems and critical components. This research contributes to enhancing product reliability, equipment utilization, and improving economic efficiency.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}