Maximal entropy prior for the simple step‐stress accelerated test. Fernando Antonio Moala, Karlla Delalibera Chagas. Quality and Reliability Engineering International (26 June 2024). DOI: 10.1002/qre.3609

The step‐stress procedure is a popular accelerated test used to analyze the lifetime of highly reliable components. This paper considers a simple step‐stress accelerated test assuming a cumulative exposure model with uncensored lifetime data following a Weibull distribution. The maximum likelihood approach is often used to analyze accelerated stress test data. An alternative is Bayesian inference, which is useful when limited data are available. In this paper, the parameters of the model are estimated from the objective Bayesian viewpoint using non‐informative priors. Our main aim is to propose the maximal data information prior (MDIP) presented by Zellner (1984) as an alternative to the conventional independent gamma priors for the unknown parameters, in situations where there is little or no a priori knowledge about them. We also obtain the Bayes estimators based on both classes of priors under three loss functions: the squared error loss function (SELF), the linear‐exponential loss function (LINEX), and the generalized entropy loss function (GELF). The proposed MDIP prior is compared with the gamma priors via Monte Carlo simulations by examining the biases and mean squared errors of the resulting estimators under the three loss functions, as well as their coverage probabilities. Additionally, we employ a Markov chain Monte Carlo (MCMC) algorithm to extract characteristics of the marginal posterior distributions, such as the Bayes estimators and credible intervals. Finally, a real lifetime dataset is presented to illustrate the proposed methodology.
{"title":"Maximal entropy prior for the simple step‐stress accelerated test","authors":"Fernando Antonio Moala, Karlla Delalibera Chagas","doi":"10.1002/qre.3609","DOIUrl":"https://doi.org/10.1002/qre.3609","url":null,"abstract":"The step‐stress procedure is a popular accelerated test used to analyze the lifetime of highly reliable components. This paper considers a simple step‐stress accelerated test assuming a cumulative exposure model with uncensored lifetime data following a Weibull distribution. The maximum likelihood approach is often used to analyze accelerated stress test data. Another approach is to use the Bayesian inference, which is useful when there is limited data available. In this paper, the parameters of the model are estimated based on the objective Bayesian viewpoint using non‐informative priors. Our main aim is to propose the maximal data information prior (MDIP) presented by Zellner (1984) as an alternative prior to the conventional independent gamma priors for the unknown parameters, in situations where there is little or no a priori knowledge about the parameters. We also obtain the Bayes estimators based on both classes of priors, assuming three different loss functions: square error loss function (SELF), linear‐exponential loss function (LINEX), and generalized entropy loss function (GELF). The proposed MDIP prior is compared with the gamma priors via Monte Carlo simulations by examining their biases and mean square errors under the three loss functions, and coverage probability. Additionally, we employ the Markov Chain Monte Carlo (MCMC) algorithm to extract characteristics of marginal posterior distributions, such as the Bayes estimator and credible intervals. Finally, a real lifetime data is presented to illustrate the proposed methodology.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian estimation of the mean time between failures of subsystems with different causes using interval‐censored system maintenance data. David Han, James D. Brownlow, Jesse Thompson, Ralph G. Brooks. Quality and Reliability Engineering International (21 June 2024). DOI: 10.1002/qre.3606
Ensuring an acceptable level of reliability is a primary imperative for any mission‐focused operation, since it is a critical determinant of success. Inadequate reliability can lead to severe repercussions, including substantial expenses for repairs and replacements, missed opportunities, service disruptions, and, in the worst cases, safety violations and human casualties. Within national defense organizations such as the USAF, the precise assessment and maintenance of system reliability play a pivotal role in ensuring the success of mission‐critical operations. In this research, our primary objective is to model the reliability of repairable subsystems within the framework of competing and complementary risks. Subsequently, we construct the overall reliability of the entire repairable system, utilizing day‐to‐day group‐censored maintenance data from two identical aircraft systems. Assuming that the lifetimes of subsystems follow non‐identical exponential distributions, it is theoretically justified that the system reliability can be modeled by homogeneous Poisson processes even though the number of subsystems of any particular type is unknown and the temporal order of multiple subsystem failures within a given time interval is uncertain due to interval censoring. Using the proposed model, we formulate the likelihood function for the mean time between failures of subsystems with different causes, and subsequently establish an inferential procedure for the model parameters. Given the considerable number of parameters to estimate, we explore the efficacy of a Bayesian approach, treating the contractor‐supplied estimates as the hyperparameters of prior distributions. This approach mitigates potential model uncertainty as well as the practical limitations of a frequentist approach, and it facilitates continuous updating of the estimates as new maintenance data become available. Finally, the entire inferential procedure was implemented in Microsoft Excel so that any reliability practitioner can use it without needing to learn sophisticated programming languages. Thus, this research supports an ongoing, real‐time assessment of overall mission reliability and aids early detection of any subsystem whose reliability falls below the threshold level.
{"title":"Bayesian estimation of the mean time between failures of subsystems with different causes using interval‐censored system maintenance data","authors":"David Han, James D. Brownlow, Jesse Thompson, Ralph G. Brooks","doi":"10.1002/qre.3606","DOIUrl":"https://doi.org/10.1002/qre.3606","url":null,"abstract":"Ensuring an acceptable level of reliability stands as a primary imperative for any mission‐focused operation since it serves as a critical determinant of success. Inadequate reliability can lead to severe repercussions, including substantial expenses for repairs and replacements, missed opportunities, service disruptions, and in the worst cases, safety violations and human casualties. Within national defense organizations such as the USAF, the precise assessment and maintenance of system reliability play a pivotal role in ensuring the success of mission‐critical operations. In this research, our primary objective is to model the reliability of repairable subsystems within the framework of competing and complementary risks. Subsequently, we construct the overall reliability of the entire repairable system, utilizing day‐to‐day group‐censored maintenance data from two identical aircraft systems. Assuming that the lifetimes of subsystems follow non‐identical exponential distributions, it is theoretically justified that the system reliability can be modeled by homogeneous Poisson processes even though the number of subsystems of any particular type is unknown and the temporal order of multiple subsystem failures within a given time interval is uncertain due to interval censoring. Using the proposed model, we formulate the likelihood function for the mean time between failures of subsystems with different causes, and subsequently establish an inferential procedure for the model parameters. Given a considerable number of parameters to estimate, we explore the efficacy of a Bayesian approach, treating the contractor‐supplied estimates as the hyperparameters of prior distributions. This approach mitigates potential model uncertainty as well as the practical limitation of a frequentist‐based approach. It also facilitates continuous updates of the estimates as new maintenance data become available. Finally, the entire inferential procedures were implemented in Microsoft Excel so that it is easy for any reliability practitioner to use without the need to learn sophisticated programming languages. Thus, this research supports an ongoing, real‐time assessment of the overall mission reliability and helps early detection of any subsystem whose reliability is below the threshold level.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing the performance of the Cpk and (X‐bar, S) sampling plans. Antonio Fernando Branco Costa. Quality and Reliability Engineering International (13 June 2024). DOI: 10.1002/qre.3605

When the Cpk sampling plan is in use, a sample of size (n) is taken from the lot, and the mean and the standard deviation of the sample observations are used to obtain Ĉpk, the sample estimator of the Cpk index; if Ĉpk is lower than a threshold (c0), the lot is rejected; otherwise, the lot is accepted. In recent studies, the α and β risks are the risks of rejecting/accepting lots of items produced by in‐control/out‐of‐control processes with Cpk values equal to C0 and C1, respectively; that is, the two parameters (n, c0) of the Cpk sampling plan depend on the inputs (α, C0, βmax, C1). When the design of the Cpk sampling plan is based on the inputs C0 and C1, the optimum sample size is always large and, excluding the cases where the magnitude of the mean shift is too small, the β risks associated with the combinations of mean shifts and variance increases (determined by the inputs C0 and C1) are always equal to the maximum allowed value βmax. This fact motivated us to compare the Cpk sampling plan with the (X‐bar, S) sampling plan, where the sample mean (X‐bar) and the sample standard deviation (S) are directly compared with thresholds. For most disturbances, the (X‐bar, S) sampling plan requires smaller samples to meet the condition β ≤ βmax; that is, for a fixed sample size, it is always possible to find endless combinations of mean shifts and variance increases where the β risks of the (X‐bar, S) sampling plan are β = βmax while the β risks of the Cpk sampling plan are β > βmax.
{"title":"Comparing the performance of the Cpk and (X‐bar, S) sampling plans","authors":"Antonio Fernando Branco Costa","doi":"10.1002/qre.3605","DOIUrl":"https://doi.org/10.1002/qre.3605","url":null,"abstract":"When the Cpk sampling plan is in use, a sample of size (n) is taken from the lot, and the mean and the standard deviation of the sample observations are used to obtain the —the sample estimator of the Cpk index; if the is lower than a threshold (c0), then the lot is rejected, otherwise, the lot is accepted. In recent studies, the α and β risks are the risks of rejecting/accepting lots of items produced by in‐control/out‐of‐control processes with Cpks equal to (, that is, the two parameters (n, c0) of the Cpk sampling plans depend on the inputs (α, , βmax¸ ). When the design of the Cpk sampling plan is based on the inputs C0 and C1, the optimum sample size is always big and, excluding the cases where the magnitude of the mean shift is too small, the β risks associated to the combinations of mean shifts and variance increases (determined by the inputs C0 and C1) are always equal to the maximum allowed value βmax. This fact motivated us to compare the Cpk sampling plan with the (X‐bar, S) sampling plan, where the sample mean (X‐bar) and the sample standard deviation (S) are directly compared with thresholds. For most disturbances, the (X‐bar, S) sampling plan requires smaller samples to meet the condition of β ≤ βmax, that is, for a fixed sample size, it is always possible to find endless combinations of mean shifts and variance increases where the β risks of the (X‐bar, S) sampling plan are β = βmax, and the β risks of the Cpk sampling plan are β > βmax.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141349639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reliability analysis method for electromagnet performance degradation based on FMEA and fuzzy inference system. Jihong Pang, Jinkun Dai, Xinze Lian, Zhigang Ding. Quality and Reliability Engineering International (9 June 2024). DOI: 10.1002/qre.3602

Electromagnets are often used for indirect control in industrial applications. An electromagnet's ability to control objects decreases as its performance degrades, and a degraded electromagnet poses a danger to people and objects in the working environment. Because of the complicated working conditions, analyzing the reliability of electromagnet performance degradation is difficult. Failure Mode and Effect Analysis (FMEA) is the most commonly used tool for product reliability analysis. The new version of FMEA uses integer ratings as evaluation values, which cannot represent the hesitation of the evaluator. The Action Priority (AP) table of FMEA describes the relationship between the evaluations of the influencing factors and the risk level of a failure mode, providing rules for determining that risk level. However, the AP table may assign multiple failure modes the same ranking, which does not align with the intention of FMEA to prevent failures. Therefore, this paper proposes a reliability analysis method for electromagnet performance degradation based on FMEA and a fuzzy inference system (FIS). First, the Double Hierarchy Hesitant Fuzzy Linguistic Term Set (DHHFLTS) is used as the evaluation language to describe the hesitation of evaluators. Second, the AP table of FMEA is used as the FIS fuzzy inference rule base; in this way, the FMEA approach to determining the risk level of a failure mode is retained and the problem of constructing FIS fuzzy rules is overcome. The FIS then defuzzifies the AP‐table inference results to determine the risk ranking of failure modes, which avoids ties in the ordering of failure modes. Finally, a performance degradation model of the electromagnet is constructed based on the Wiener process, and the calculation results of the new method are verified.
{"title":"A reliability analysis method for electromagnet performance degradation based on FMEA and fuzzy inference system","authors":"Jihong Pang, Jinkun Dai, Xinze Lian, Zhigang Ding","doi":"10.1002/qre.3602","DOIUrl":"https://doi.org/10.1002/qre.3602","url":null,"abstract":"Electromagnets are often used in indirect control for industrial applications. The ability of an electromagnet to control objects should decrease with performance degradation. And electromagnets product poses a danger to people and objects in the working environment. So, it is very difficult to analyze the reliability of electromagnetic performance degradation because of the complicated working condition. Failure Mode and Effect Analysis (FMEA) is the most commonly used tool for product reliability analysis. The new version of FMEA uses integer as evaluation value, which cannot represent the hesitation psychology of the evaluator. The Action Priority (AP) table of the FMEA describes the relationship between the evaluation of influencing factors and the risk level of the failure mode, which provides rules for determining the risk level of the failure mode. However, the AP table may result in multiple failure modes having the same ranking, which does not align with the intention of FMEA to prevent failures. Therefore, this paper proposes a reliability analysis method for electromagnetic performance degradation based on FMEA and FIS. Firstly, the Double Hierarchy Hesitant Fuzzy Linguistic Term Set (DHHFLTS) is used as the evaluation language to describe the hesitation psychology of evaluators. Secondly, the AP table of FMEA is used as FIS fuzzy inference rule. In this way, the idea of FMEA to determine the risk level of failure mode is retained and the problem of FIS fuzzy rule making is overcome. Then, FIS defuzzification AP table inference results to determine the risk ranking of failure modes. This avoids situations where the order of failure modes is equal. Finally, a performance degradation model of the electromagnet is constructed based on the Wiener process, and the calculation results of the new method are verified.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141366843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parameter estimation of inverse Weibull distribution under competing risks based on the expectation–maximization algorithm. R. Alotaibi, H. Rezk, C. Park. Quality and Reliability Engineering International (9 June 2024). DOI: 10.1002/qre.3599

A system consisting of interconnected components in series is under consideration. This research focuses on estimating the parameters of this system from incomplete lifetime data within the framework of competing risks, employing an underlying inverse Weibull distribution. While one popular method for parameter estimation is the Newton–Raphson (NR) technique, its sensitivity to the selection of initial values poses a significant drawback, often resulting in convergence failures. Therefore, this paper opts for the expectation–maximization (EM) algorithm. In competing risks scenarios, the precise cause of failure is frequently unidentified, and these issues can be further complicated by potential censoring; thus, incompleteness may arise from both censoring and masking. In this study, we present the EM‐type parameter estimation and demonstrate its superiority over parameter estimation based on the NR method. Two illustrative examples are provided. The proposed method is compared with the existing Weibull competing risks model, revealing the superiority of our approach. Through Monte Carlo simulations, we also examine the sensitivity of both the NR‐type method and our proposed method to the selection of initial values.
{"title":"Parameter estimation of inverse Weibull distribution under competing risks based on the expectation–maximization algorithm","authors":"R. Alotaibi, H. Rezk, C. Park","doi":"10.1002/qre.3599","DOIUrl":"https://doi.org/10.1002/qre.3599","url":null,"abstract":"A system consisting of interconnected components in series is under consideration. This research focuses on estimating the parameters of this system for incomplete lifetime data within the framework of competing risks, employing an underlying inverse Weibull distribution. While one popular method for parameter estimation involves the Newton–Raphson (NR) technique, its sensitivity to initial value selection poses a significant drawback, often resulting in convergence failures. Therefore, this paper opts for the expectation–maximization (EM) algorithm. In competing risks scenarios, the precise cause of failure is frequently unidentified, and these issues can be further complicated by potential censoring. Thus, incompleteness may arise due to both censoring and masking. In this study, we present the EM‐type parameter estimation and demonstrate its superiority over parameter estimation based on the NR method. Two illustrative examples are provided. The proposed method is compared with the existing Weibull competing risks model, revealing the superiority of our approach. Through Monte Carlo simulations, we also examine the sensitivity of the initial value selection for both the NR‐type method and our proposed method.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141366825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive EWMA control chart by using support vector regression. Muhammad Waqas Kazmi, Muhammad Noor‐ul‐Amin. Quality and Reliability Engineering International (8 June 2024). DOI: 10.1002/qre.3603

Traditional control charts depend on fixed process parameters to monitor shifts in the process, whereas adaptive control charts adapt a process parameter during online monitoring. This research introduces a support vector regression (SVR) based adaptive exponentially weighted moving average (AEWMA) control chart to enhance the monitoring of the process mean in industrial processes. The study systematically investigates the comparative efficiency of linear, radial basis function (RBF), and polynomial kernels within the SVR framework. The proposed SVR‐based AEWMA control chart leverages the strengths of the RBF kernel, providing a robust mechanism for detecting shifts in the process mean by adapting the smoothing constant to the size of the shift. To validate the efficacy of the proposed methodology, a practical application using real‐life data is presented, showcasing the adaptability and reliability of the SVR‐based AEWMA control chart in monitoring location shifts.
{"title":"Adaptive EWMA control chart by using support vector regression","authors":"Muhammad Waqas Kazmi, Muhammad Noor‐ul‐Amin","doi":"10.1002/qre.3603","DOIUrl":"https://doi.org/10.1002/qre.3603","url":null,"abstract":"Traditional control charts depend on the process parameters that are used to monitor the shifts in the process. The adaptive control charts are used to adapt a process parameter during the online monitoring. This research introduces a support vector regression (SVR) based adaptive exponentially weighted moving average control chat to enhance the monitoring of the mean in industrial processes. The study systematically investigates the comparative efficiency of linear, radial basis function (RBF), and polynomial functions within the SVR framework. The proposed SVR‐based AEWMA control chart leverages the strengths of the RBF kernel, providing a robust mechanism for detecting shifts in the process mean by adapting the smoothing constant according to the size of the shift. To validate the efficacy of the proposed methodology, a practical application is presented by using real‐life data. The application showcases the adaptability and reliability of the SVR‐based adaptive EWMA control chart in effectively monitoring location shifts.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141369303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proper choice of location cumulative sum control charts for different environments. H. Z. Nazir, Muhammad Ikram, M. W. Amir, Noureen Akhtar, Zameer Abbas, Muhammad Riaz. Quality and Reliability Engineering International (7 June 2024). DOI: 10.1002/qre.3598
Statistical process control (SPC) plays a significant role in the field of quality control, and control charts are important SPC tools for monitoring the output of a production process. The cumulative sum (CUSUM) charting structure is one such technique, designed to identify small and moderate changes in the process parameters. In the literature, the assumption of normality is considered ideal for a control chart, but in practice many quality characteristics do not satisfy it. The current study proposes a class of CUSUM control charts for the location parameter of a process and investigates the performance of these charts based on ranked set sampling (RSS) for quality characteristics arising in different environments. Different point estimators of location are considered to develop the location control charts under normal and a variety of non‐normal environments. The numerical results show that the newly designed schemes perform uniformly better than their competitors. A real‐life example from a manufacturing process is also provided to illustrate the practical implementation of the proposed scheme.
{"title":"Proper choice of location cumulative sum control charts for different environments","authors":"H. Z. Nazir, Muhammad Ikram, M. W. Amir, Noureen Akhtar, Zameer Abbas, Muhammad Riaz","doi":"10.1002/qre.3598","DOIUrl":"https://doi.org/10.1002/qre.3598","url":null,"abstract":"In the field of quality control, statistical process control (SPC) has its significance. The control charts are important tools of the SPC to observe the outputs of the production process. The cumulative sum (CUSUM) charting structure is one such technique that is designed to identify the medium and slight changes in the process parameters. In the literature, the assumption of normality is considered ideal for the control chart but in practice, many quality characteristics do not follow the assumption of normality. The current study proposes a class of CUSUM control charts for the location parameter of the process and investigates the performance of said location charts based on ranked set sampling (RSS) for quality characteristics having different environments. Different point estimators of location are considered in this study to develop the location control charts under normal, and a variety of non‐normal environments. The numerical results show that the newly designed schemes perform uniformly well than their competitors. A real‐life example linked with the manufacturing process is also provided for the practical implementation of the proposed scheme.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141372984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How fair is machine learning in credit lending? G. Babaei, Paolo Giudici. Quality and Reliability Engineering International (3 June 2024). DOI: 10.1002/qre.3579

Machine learning models are widely used to decide whether to accept or reject credit loan applications. However, like human decisions, they may discriminate between particular groups of applicants, for instance on the basis of age, gender, or race. In this paper, we investigate whether machine learning credit lending models are biased in a real case study concerning borrowers applying for credit in different regions of the United States. We show how to measure model fairness using different metrics, and we explore the capability of explainable machine learning to add further insights. From a constructive viewpoint, we propose a propensity matching approach that can improve fairness.
{"title":"How fair is machine learning in credit lending?","authors":"G. Babaei, Paolo Giudici","doi":"10.1002/qre.3579","DOIUrl":"https://doi.org/10.1002/qre.3579","url":null,"abstract":"Machine learning models are widely used to decide whether to accept or reject credit loan applications. However, similarly to human‐based decisions, they may discriminate between special groups of applicants, for instance based on age, gender, and race. In this paper, we aim to understand whether machine learning credit lending models are biased in a real case study, that concerns borrowers asking for credits in different regions of the United States. We show how to measure model fairness using different metrics, and we explore the capability of explainable machine learning to add further insights. From a constructive viewpoint, we propose a propensity matching approach that can improve fairness.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141270841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing changes in reliability methods over time: An unsupervised text mining approach. Charles K. Brown, Bruce G. Cameron. Quality and Reliability Engineering International (31 May 2024). DOI: 10.1002/qre.3596

Reliability engineering faces many of the same challenges today that it did at its inception in the 1950s. The fundamental issue remains uncertainty in system representation, specifically related to performance model structure and parameterization. Details of a design are unavailable early in the development process, and therefore performance models must either account for the range of possibilities or be wrong. Increasing system complexity has compounded this uncertainty. In this work, we seek to understand how the reliability engineering literature has shifted over time. We execute a systematic literature review of 30,543 reliability engineering papers (covering roughly a third of the reliability papers indexed by Elsevier's Engineering Village). Topic modeling was performed on the abstracts of those papers to identify 279 topics. Hierarchical topic reduction resulted in the identification of eight top‐level method topics (prognostics, statistics, maintenance, quality control, management, physics of failure, modeling, and risk assessment) as well as three domain‐specific topics (nuclear, infrastructure, and software). We found that topics associated with later phases of the development process (such as prognostics, maintenance, and quality control) have increased in popularity over time relative to other topics. We propose that this is a response to the challenges posed by model uncertainty and increasing complexity.
{"title":"Assessing changes in reliability methods over time: An unsupervised text mining approach","authors":"Charles K. Brown, Bruce G. Cameron","doi":"10.1002/qre.3596","DOIUrl":"https://doi.org/10.1002/qre.3596","url":null,"abstract":"Reliability engineering faces many of the same challenges today that it did at its inception in the 1950s. The fundamental issue remains uncertainty in system representation, specifically related to performance model structure and parameterization. Details of a design are unavailable early in the development process and therefore performance models must either account for the range of possibilities or be wrong. Increasing system complexity has compounded this uncertainty. In this work, we seek to understand how the reliability engineering literature has shifted over time. We exe cute a systematic literature review of 30,543 reliability engineering papers (covering roughly a third of the reliability papers indexed by Elsevier's Engineering Village. Topic modeling was performed on the abstracts of those papers to identify 279 topics. The hierarchical topic reduction resulted in the identification of eight top‐level method topics (prognostics, statistics, maintenance, quality control, management, physics of failure, modeling, and risk assessment) as well as three domain‐specific topics (nuclear, infrastructure, and software). We found that topics more associated with later phases in the development process (such as prognostics, maintenance, and quality control) have increased in popularity over time relative to other topics. We propose that this is a response to the challenges posed by model uncertainty and increasing complexity.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141197308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability and maintainability estimation of a multi‐failure‐cause system under imperfect maintenance. Fatemeh Safaei, Sharareh Taghipour. Quality and Reliability Engineering International (31 May 2024). DOI: 10.1002/qre.3595

Estimating the reliability and maintainability (R & M) parameters is crucial in various industrial applications. It serves purposes such as evaluating system performance and safety, minimising the risk and cost of potential failures, and designing efficient maintenance strategies. This task becomes challenging for complex repairable systems, where failures can occur due to different causes and performance may be affected by various covariates (such as material, environment, and labour). Another challenge in R & M studies arises from the presence of censoring in failure times. Existing methodologies often fail to account for all the aforementioned aspects of system‐related data in R & M analysis. By incorporating valuable information from covariates and utilising data from censored failure times alongside complete failure data, the accuracy of R & M parameter estimation can be significantly improved. This paper develops reliability models for repairable systems with multiple failure causes in the presence of covariates, where the system can also be subject to imperfect maintenance. The R & M parameters are then estimated by applying the virtual age concept of the Kijima Type I and Type II models. The proposed technique is illustrated using two case studies on gas pipelines and aero‐engine systems. Through these case studies, we show that the proposed method not only provides more efficient estimates of the R & M parameters than the alternative approach, but is also easier to apply and yields more straightforward interpretations.
{"title":"Reliability and maintainability estimation of a multi‐failure‐cause system under imperfect maintenance","authors":"Fatemeh Safaei, Sharareh Taghipour","doi":"10.1002/qre.3595","DOIUrl":"https://doi.org/10.1002/qre.3595","url":null,"abstract":"Estimating the reliability and maintainability (R & M) parameters is crucial in various industrial applications. It serves purposes such as evaluating system performance and safety, minimising the risk and cost of potential failures, and designing efficient maintenance strategies. This task becomes challenging for complex repairable systems, where failures can occur due to different causes and performance may be affected by various covariates (such as material, environment, and labour). Another challenge in R & M studies arises from the presence of censorship in failure times. Existing methodologies often fail to account for all the aforementioned aspects of system‐related data in R & M analysis. By incorporating valuable information from covariates and utilising data from censored failure times alongside complete failure data, the accuracy of R & M parameter estimation can be significantly improved. This paper develops reliability models for repairable systems with multiple failure causes in the presence of covariates. The system can also be subject to imperfect maintenance. The R & M parameters are then estimated by applying the Kijima Type I and II model's virtual age concept. The proposed technique is illustrated using two case studies on gas pipelines and aero‐engine systems. Through these case studies, we show that the proposed method not only provides more efficient estimates of the R & M parameters compared to the alternative approach, but it is also easier to apply and yields more straightforward interpretations.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141189028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}