Yufeng Wang, Yonghua Li, Dongxu Zhang, Duo Zhang, Min Chai
A novel structural reliability analysis method that combines the improved beluga whale optimization (IBWO) and the arctangent function‐based maximum entropy method (AMEM) is proposed in this paper. It aims to improve the accuracy of failure probability prediction relative to structural reliability analysis based on the traditional maximum entropy method (MEM). First, the arctangent function is introduced to avoid the effects of truncation error and numerical overflow in the traditional MEM: it nonlinearly transforms the structural performance function, defined on an infinite interval, into a transformed performance function defined on a bounded interval. Subsequently, the undetermined Lagrange multipliers in the maximum entropy probability density function (MEPDF) of the transformed performance function are obtained using IBWO, with faster convergence and higher convergence accuracy. Finally, the MEPDF of the transformed performance function is obtained by combining IBWO and AMEM, and the structural failure probability is predicted. The analysis of a metro bogie frame as an engineering example reveals that, compared with the traditional MEM using a genetic algorithm to solve for the Lagrange multipliers, the proposed method reduces the relative error in failure probability prediction from 20.51% to only 0.09%, significantly enhancing prediction accuracy.
{"title":"A novel structural reliability analysis method combining the improved beluga whale optimization and the arctangent function‐based maximum entropy method","authors":"Yufeng Wang, Yonghua Li, Dongxu Zhang, Duo Zhang, Min Chai","doi":"10.1002/qre.3640","DOIUrl":"https://doi.org/10.1002/qre.3640","url":null,"abstract":"A novel structural reliability analysis method that combines the improved beluga whale optimization (IBWO) and the arctangent function‐based maximum entropy method (AMEM) is proposed in this paper. It aims to augment the accuracy of failure probability prediction in structural reliability analysis based on the traditional maximum entropy method (MEM). First, the arctangent function is introduced to avoid the effects of truncation error and numerical overflow in the traditional MEM. The arctangent function can nonlinearly transform the structural performance function defined on the infinite interval into a transformed performance function defined on the bounded interval. Subsequently, the undetermined Lagrange multipliers in the maximum entropy probability density function (MEPDF) of the transformed performance function are obtained using IBWO at a swifter convergence speed with heightened convergence accuracy. Finally, the MEPDF of the transformed performance function can be obtained by combining the IBWO and AMEM, and the structural failure probability can be predicted. The analysis of the metro bogie frame as an engineering example reveals that compared with the traditional MEM using the genetic algorithm to solve the Lagrange multipliers, the proposed method diminishes the relative error in failure probability prediction from 20.51% to only 0.09%. This method significantly enhances the prediction accuracy of failure probability.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The two‐parameter exponential distribution (TPED) is often used to model time‐between‐events data. In this paper, we propose cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) charts for simultaneously monitoring the parameters (location and scale) of the TPED. A key feature of the proposed charts is their straightforward post‐signal diagnostics. Monte Carlo simulations are used to estimate the zero‐state and steady‐state average run‐length (ARL) profiles of the proposed charts. The ARL performances of the existing and proposed charts are assessed in terms of the expected weighted run‐length and the relative mean index. It is found that the proposed charts outperform the existing charts. A real dataset is used to illustrate the implementation of the proposed charts.
{"title":"New CUSUM and EWMA charts with simple post signal diagnostics for two‐parameter exponential distribution","authors":"Waqas Munir, Abdul Haq","doi":"10.1002/qre.3636","DOIUrl":"https://doi.org/10.1002/qre.3636","url":null,"abstract":"The two‐parameter exponential distribution (TPED) is often used to model time‐between‐events data. In this paper, we propose CUmulative SUM and exponentially weighted moving average charts for simultaneously monitoring the parameters (location and scale) of the TPED. A key feature of the proposed charts is their straightforward post‐signal diagnostics. Monte Carlo simulations are used to estimate the zero‐state and steady‐state average run‐length (ARL) profiles of the proposed charts. The ARL performances of existing and proposed charts are assessed in terms of expected weighted run‐length and relative mean index. It is found that the proposed charts outperform the existing charts. A real dataset is used to illustrate the implementation of the proposed charts.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mustafa M. Hasaballah, Y. Tashkandy, O. S. Balogun, M. E. Bakr
The joint progressive Type‐II censoring scheme is an advantageous cost‐saving strategy. In this paper, we investigate classical and Bayesian methodologies for estimating the combined parameters of two distinct Lomax distributions under the joint progressive Type‐II censoring scheme. Maximum likelihood estimators are derived, and asymptotic confidence intervals are presented. Bayesian estimates and their corresponding credible intervals are calculated under both symmetric and asymmetric loss functions using the Markov chain Monte Carlo (MCMC) method, and the simulation study likewise employs the MCMC approximation. Furthermore, we discuss the practical application of these methods, with an illustration through the analysis of a real dataset.
{"title":"Bayesian inference for two populations of Lomax distribution under joint progressive Type‐II censoring schemes with engineering applications","authors":"Mustafa M. Hasaballah, Y. Tashkandy, O. S. Balogun, M. E. Bakr","doi":"10.1002/qre.3633","DOIUrl":"https://doi.org/10.1002/qre.3633","url":null,"abstract":"The joint progressive Type‐II censoring scheme is an advantageous cost‐saving strategy. In this paper, investigated classical and Bayesian methodologies for estimating the combined parameters of two distinct Lomax distributions employing the joint progressive Type‐II censoring scheme. Maximum likelihood estimators have been derived, and asymptotic confidence intervals are presented. Bayesian estimates and their corresponding credible intervals are calculated, incorporating both symmetry and asymmetry loss functions through the utilization of the Markov Chain Monte Carlo (MCMC) method. The simulation aspect has employed the MCMC approximation method. Furthermore, discussed the practical application of these methods, providing illustration through the analysis of a real dataset.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141921095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality testing and monitoring advancements have allowed modern production processes to achieve extremely low failure rates, especially in the era of Industry 4.0. Such processes are known as high‐yield processes, and their data sets contain an excess number of zeros. Count models such as Poisson, Negative Binomial (NB), and Conway‐Maxwell‐Poisson (COM‐Poisson) are usually considered good candidates for modeling such data, but the observed zeros often exceed the number of zeros these models can inherently accommodate. Hence, the zero‐inflated versions of these count models provide a better fit to high‐yield data. Usually, linearly or non‐linearly related variables are also associated with failure rate data; hence, regression models based on zero‐inflated count models are used for model fitting. This study proposes deep learning (DL) based control charts for failure rate variables that follow the zero‐inflated COM‐Poisson (ZICOM‐Poisson) distribution, because DL models can detect complicated non‐linear patterns and relationships in data. The proposed methods are compared with existing control charts based on neural networks, principal component analysis designed for the Poisson, NB, and zero‐inflated Poisson (ZIP) models, and non‐linear principal component analysis designed for the Poisson, NB, and ZIP models. A simulation study evaluates the monitoring approaches using run‐length properties, and a flight delay application illustrates the implementation of the research. The findings reveal that the proposed methods outperform all existing control charts.
{"title":"Surveillance of high‐yield processes using deep learning models","authors":"Musaddiq Ibrahim, Chunxia Zhang, Tahir Mahmood","doi":"10.1002/qre.3635","DOIUrl":"https://doi.org/10.1002/qre.3635","url":null,"abstract":"Quality testing and monitoring advancements have allowed modern production processes to achieve extremely low failure rates, especially in the era of Industry 4.0. Such processes are known as high‐yield processes, and their data set consists of an excess number of zeros. Count models such as Poisson, Negative Binomial (NB), and Conway‐Maxwell‐Poisson (COM‐Poisson) are usually considered good candidates to model such data, but the excess zeros are larger than the number of zeros, which these models fit inherently. Hence, the zero‐inflated version of these count models provides better fitness of high‐quality data. Usually, linearly/non‐linearly related variables are also associated with failure rate data; hence, regression models based on zero‐inflated count models are used for model fitting. This study is designed to propose deep learning (DL) based control charts when the failure rate variables follow the zero‐inflated COM‐Poisson (ZICOM‐Poisson) distribution because DL models can detect complicated non‐linear patterns and relationships in data. Further, the proposed methods are compared with existing control charts based on neural networks, principal component analysis designed based on Poisson, NB, and zero‐inflated Poisson (ZIP) and non‐linear principal component analysis designed based on Poisson, NB, and ZIP. Using run length properties, the simulation study evaluates monitoring approaches, and a flight delay application illustrates the implementation of the research. The findings revealed that the proposed methods have outperformed all existing control charts.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141926020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to insufficient feature learning ability and bloated network structures, gear fault diagnosis methods based on traditional deep neural networks often suffer from poor diagnosis accuracy and low diagnosis efficiency. Therefore, a small channel convolutional neural network under a multiscale fusion attention mechanism (MSFAM‐SCCNN) is proposed in this paper. First, a small channel convolutional neural network (SCCNN) model is constructed based on the framework of the traditional AlexNet model to make the network structure lightweight and improve learning efficiency. Then, a novel multiscale fusion attention mechanism (MSFAM) is embedded into the SCCNN model; it uses multiscale striped convolutional windows to extract key features along three dimensions (temporal, spatial, and channel‐wise), resulting in more precise feature mining. Finally, the performance of the MSFAM‐SCCNN model is verified using vibration data of tooth‐broken gears obtained on a self‐designed experimental bench of an ammunition supply and delivery system.
{"title":"Gear fault diagnosis based on small channel convolutional neural network under multiscale fusion attention mechanism","authors":"Xuejiao Du, Bowen Liu, Jingbo Gai, Yulin Zhang, Xiangfeng Shi, Hailong Tian","doi":"10.1002/qre.3631","DOIUrl":"https://doi.org/10.1002/qre.3631","url":null,"abstract":"Due to the insufficient feature learning ability and the bloated network structure, the gear fault diagnosis methods based on traditional deep neural networks always suffer from poor diagnosis accuracy and low diagnosis efficiency. Therefore, a small channel convolutional neural network under the multiscale fusion attention mechanism (MSFAM‐SCCNN) is proposed in this paper. First, a small channel convolutional neural network (SCCNN) model is constructed based on the framework of the traditional AlexNet model in order to lightweight the network structure and improve the learning efficiency. Then, a novel multiscale fusion attention mechanism (MSFAM) is embedded into the SCCNN model, which utilizes multiscale striped convolutional windows to extract key features from three dimensions, including temporal, spatial, and channel‐wise, resulting in more precise feature mining. Finally, the performance of the MSFAM‐ SCCNN model is verified using the vibration data of tooth‐broken gears obtained by a self‐designed experimental bench of an ammunition supply and delivery system.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Qualification is a process that demonstrates whether a product meets or exceeds specified requirements. Testing and data analysis performed within a qualification procedure should verify that products satisfy those requirements, including reliability requirements. Most of the electronics industry qualifies products using procedures dictated within qualification standards. A review of common qualification standards reveals that those standards do not consider customer requirements or the product physics‐of‐failure in the intended application. As a result, qualification, as represented in the reviewed qualification standards, would not meet our definition of qualification for reliability assessment. This paper introduces the application of diagnostics and prognostics techniques to analyze real‐time data trends while conducting qualification tests. Diagnostics techniques identify anomalous behavior exhibited by the product, and prognostics techniques forecast how the product will behave during the remainder of the qualification test and how the product would have behaved if the test continued. As a result, combining diagnostics and prognostics techniques can enable the prediction of the remaining time‐to‐failure for the product undergoing qualification. Several ancillary benefits related to an improved testing strategy, parts selection and management, and support of a prognostics and health management system in operation also arise from applying prognostics and diagnostics techniques to qualification.
{"title":"Enhancing qualification via the use of diagnostics and prognostics techniques","authors":"Abhishek Ram, Diganta Das","doi":"10.1002/qre.3634","DOIUrl":"https://doi.org/10.1002/qre.3634","url":null,"abstract":"Qualification is a process that demonstrates whether a product meets or exceeds specified requirements. Testing and data analysis performed within a qualification procedure should verify that products satisfy those requirements, including reliability requirements. Most of the electronics industry qualifies products using procedures dictated within qualification standards. A review of common qualification standards reveals that those standards do not consider customer requirements or the product physics‐of‐failure in that intended application. As a result, qualification, as represented in the reviewed qualification standards, would not meet our definition of qualification for reliability assessment. This paper introduces the application of diagnostics and prognostics techniques to analyze real‐time data trends while conducting qualification tests. Diagnostics techniques identify anomalous behavior exhibited by the product, and prognostics techniques forecast how the product will behave during the remainder of the qualification test and how the product would have behaved if the test continued. As a result, combining diagnostics and prognostics techniques can enable the prediction of the remaining time‐to‐failure for the product undergoing qualification. Several ancillary benefits related to an improved testing strategy, parts selection and management, and support of a prognostics and health management system in operation also arise from applying prognostics and diagnostics techniques to qualification.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The assessment of rolling bearing reliability is vital for ensuring mechanical operational safety and minimizing maintenance costs. Because data on the performance degradation and failure times of rolling bearings are difficult to obtain, traditional reliability assessment methods are challenged. This paper introduces a novel hybrid method for the reliability assessment of rolling bearings, combining a convolutional neural network (CNN)‐convolutional block attention module (CBAM)‐bidirectional long short‐term memory (BiLSTM) network with the Wiener process. The approach comprises three distinct stages. Initially, two‐dimensional time‐frequency representations of the bearings at various operational phases are acquired using the continuous wavelet transform. Subsequently, the CNN‐CBAM‐BiLSTM network is employed to establish a health index (HI) for the bearings and to extract deep features, which serve as input to the Wiener process. The final stage applies the Wiener process to evaluate the bearings' reliability, characterizing the HI and quantifying uncertainties. Experiments on bearing degradation data indicate the effectiveness and superiority of the proposed hybrid method.
{"title":"A hybrid reliability assessment method based on health index construction and reliability modeling for rolling bearing","authors":"Yuan‐Jian Yang, Chengyuan Ma, Gui‐Hua Liu, Hao Lu, Le Dai, Jia‐Lun Wan, Junyu Guo","doi":"10.1002/qre.3630","DOIUrl":"https://doi.org/10.1002/qre.3630","url":null,"abstract":"The assessment of rolling bearing reliability is vital for ensuring mechanical operational safety and minimizing maintenance costs. Due to the difficulty in obtaining data on the performance degradation and failure time of rolling bearings, traditional methods for reliability assessment are challenged. This paper introduces a novel hybrid method for the reliability assessment of rolling bearings, combining the convolutional neural network (CNN)‐convolutional block attention module (CBAM)‐ bidirectional long short‐term memory (BiLSTM) network with the Wiener process. The approach comprises three distinct stages: Initially, it involves acquiring two‐dimensional time‐frequency representations of bearings at various operational phases using Continuous Wavelet Transform. Subsequently, the CNN‐CBAM‐BiLSTM network is employed to establish health index (HI) for the bearings and to facilitate the extraction of deep features, serving as input for the Wiener process. The final stage applies the Wiener process to evaluate the bearings’ reliability, characterizing the HI and quantifying uncertainties. The experiment is performed on bearing degradation data and the results indicate the effectiveness and superiority of the proposed hybrid method.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141940731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability is a critical factor in assessing the health of software and identifying defects. Software reliability growth models (SRGMs) are used to estimate the occurrence of software faults. There are various parameterized and non‐parameterized SRGMs, but these models predict fault occurrence effectively only under limited testing conditions. To resolve this problem, various neural and artificial neural network (ANN) models have been proposed; however, ANNs are prone to over‐fitting and under‐fitting. Non‐autoregressive time series models, including ANN variants, offer promising solutions to under‐fitting in SRGMs, providing enhanced predictive capabilities for fault occurrence across diverse testing conditions. This study proposes a modified nonlinear autoregressive (NAR) model with a Bayesian regularization technique to address over‐fitting; the modification adapts the Bayesian regularization framework to NAR models by carefully adjusting the regularization parameters. Comprehensive testing on real‐world software failure datasets is conducted to evaluate the effectiveness of the proposed approach. The results demonstrate that the modified approach improves generalization capability and prediction accuracy: the NAR‐ANN model exhibits a lower mean squared error of 0.12935 and a higher value of 0.99853.
{"title":"Advancing software reliability with time series insights: A non‐autoregressive ANN approach","authors":"Shiv Kumar Sharma, Rohit Kumar Rana","doi":"10.1002/qre.3632","DOIUrl":"https://doi.org/10.1002/qre.3632","url":null,"abstract":"Software reliability is a critical factor in assessing the health of software and identifying defects. Software reliability growth models (SRGM) are used to estimate the occurrence of software faults. There are various parameterized and non‐parameterized models of SRGM. These models effectively predict fault occurrence for limited testing conditions. To resolve this problem various neural and artificial neural network (ANN) models are proposed. A problem while using ANN is over‐fitting and under‐fitting. Non‐autoregressive time series models, including ANN variants, offer promising solutions to address under‐fitting issues in SRGM, providing enhanced predictive capabilities for fault occurrence across diverse testing conditions. This study proposes a modified version with a Bayesian regularization technique to address over‐fitting. This modification aims to enhance the suitability of the Bayesian regularization framework for nonlinear autoregressive (NAR) models by carefully adjusting regularization parameters. Comprehensive testing with real‐world software failure datasets is conducted to evaluate the effectiveness of the proposed approach. The results demonstrate that our modified approach improved generalization capabilities and increased prediction accuracy. The NAR‐ANN model exhibits a lower mean squared error of 0.12935 and a higher value of 0.99853.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141882349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vaibhav N. Dhameliya, Raj Kamal Maurya, Ritwik Bhattacharya
In many research studies, multiple objectives need to be considered simultaneously to ensure an effective and efficient investigation. A compound optimal design provides a viable solution to this problem, allowing the overall benefit to be maximized through the integration of several criteria. This paper addresses the application of compound optimal designs in the context of progressive first‐failure censoring, with a particular focus on the two‐parameter Generalized Exponential distribution. An illustrative example of compound designs is provided that considers a cost function along with the trace, variance, and determinant of the inverse Fisher information matrix. The best design is determined using a graphical solution technique that is both comprehensible and precise. Using a simple example, we demonstrate the advantage of compound optimal designs over constrained optimal designs. Furthermore, the paper examines real‐world data to demonstrate the practical utility of compound optimal designs.
{"title":"Implementation of compound optimal design in progressive first‐failure censored data","authors":"Vaibhav N. Dhameliya, Raj Kamal Maurya, Ritwik Bhattacharya","doi":"10.1002/qre.3628","DOIUrl":"https://doi.org/10.1002/qre.3628","url":null,"abstract":"In many research studies, multiple objectives need to be considered simultaneously to ensure an effective and efficient investigation. A compound optimal design provides a viable solution to this problem, allowing for the maximization of overall benefits through the integration of several factors. The paper addresses the application of compound optimal designs in the context of progressive first‐failure censoring, with a particular focus on the Generalized Exponential distribution with two parameters. The paper provides an illustrative example of compound designs by considering the cost function along with trace, variance, and determinant of inverse Fisher information. The best design is determined using a graphical solution technique that is both comprehensible and precise. Using a simple example, we demonstrate the advantage of compound optimal designs over constraint optimal designs. Furthermore, the paper examines real‐world data collection to demonstrate the practical utility of compound optimal designs.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tianxing Wang, Yan‐Feng Li, Hong‐Zhong Huang, Song Bai
This study uses the rank correlation coefficient to examine the multi‐site failure correlation of turbine discs. Drawing on the stress‐strength interference model, reliability models both with and without the multi‐site failure correlation are constructed. Furthermore, the weakest link theory (WLT) under the Weibull distribution function is invoked to develop a model for predicting the fatigue life of turbine discs that takes into account the number of critical sections. The variability in the low cycle fatigue (LCF) life of turbine discs is scrutinized, leading to the formulation of a probabilistic fatigue life prediction method for these discs. Comparison of theoretical values with experimental ones makes it evident that factoring in the multi‐site failure correlation significantly enhances the accuracy of turbine disc life predictions.
{"title":"A weakest link theory‐based probabilistic fatigue life prediction method for the turbine disc considering the influence of the number of critical sections","authors":"Tianxing Wang, Yan‐Feng Li, Hong‐Zhong Huang, Song Bai","doi":"10.1002/qre.3629","DOIUrl":"https://doi.org/10.1002/qre.3629","url":null,"abstract":"This study utilizes the rank correlation coefficient to examine the multi‐site failure correlation of turbine discs. Drawing from the stress‐strength interference model, reliability models both with and without factoring in the multi‐site failure correlation are constructed. Furthermore, the weakest link theory (WLT) within the context of the Weibull distribution function is invoked to develop a model for predicting the fatigue life of turbine discs, taking into account the quantity of critical sections. The variability in the low cycle fatigue (LCF) of turbine discs is scrutinized, leading to the formulation of a probabilistic fatigue life prediction method for these discs. When comparing theoretical values with experimental ones, it becomes evident that factoring in the multi‐site failure correlation significantly enhances the accuracy of turbine disc life predictions.","PeriodicalId":56088,"journal":{"name":"Quality and Reliability Engineering International","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}