In this paper, two control chart methods are proposed for monitoring a process whose quality characteristic follows the two-parameter Gompertz distribution: the Gompertz Shewhart approach and the Gompertz skewness correction method. A simulation study was conducted to compare the performance of the proposed charts with that of the standard skewness correction approach for various sample sizes. Furthermore, real-life data on the thickness of paint on refrigerators, which are nonnormal and well described by a Gompertz distribution, were used to illustrate the proposed control charts. The coverage probability (CP), control limit interval (CLI), and average run length (ARL) were used to measure the performance of the two methods. It was found that the Gompertz exact method, in which the control limits are calculated from percentiles of the underlying distribution, has the highest coverage probability, while the Gompertz Shewhart approach and the Gompertz skewness correction method have the smallest CLI and ARL. Hence, the two-parameter Gompertz-based methods detect an out-of-control process faster.
{"title":"On Performance of Two-Parameter Gompertz-Based X¯ Control Charts","authors":"J. Adewara, Kayode S. Adekeye, O. L. Aako","doi":"10.1155/2020/8081507","DOIUrl":"https://doi.org/10.1155/2020/8081507","url":null,"abstract":"In this paper, two methods of control chart were proposed to monitor the process based on the two-parameter Gompertz distribution. The proposed methods are the Gompertz Shewhart approach and Gompertz skewness correction method. A simulation study was conducted to compare the performance of the proposed chart with that of the skewness correction approach for various sample sizes. Furthermore, real-life data on thickness of paint on refrigerators which are nonnormal data that have attributes of a Gompertz distribution were used to illustrate the proposed control chart. The coverage probability (CP), control limit interval (CLI), and average run length (ARL) were used to measure the performance of the two methods. It was found that the Gompertz exact method where the control limits are calculated through the percentiles of the underline distribution has the highest coverage probability, while the Gompertz Shewhart approach and Gompertz skewness correction method have the least CLI and ARL. Hence, the two-parameter Gompertz-based methods would detect out-of-control faster for Gompertz-based charts.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8081507","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48783797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Weibull growth model is an important model, especially for describing growth instability. In this paper, three methods for estimating the four-parameter Weibull growth model, namely generalized maximum entropy, Bayes, and maximum a posteriori, are presented and compared. To achieve this aim, a simulation technique was used to generate the samples and perform the required comparisons, using varying sample sizes (10, 12, 15, 20, 25, and 30) and a fixed error standard deviation of 0.5. The computational results show that the Bayes method gives the best estimates.
{"title":"Comparative Study between Generalized Maximum Entropy and Bayes Methods to Estimate the Four Parameter Weibull Growth Model","authors":"Saifaldin Hashim Kamar, Basim Shlaibah Msallam","doi":"10.1155/2020/7967345","DOIUrl":"https://doi.org/10.1155/2020/7967345","url":null,"abstract":"The Weibull growth model is an important model especially for describing the growth instability; therefore, in this paper, three methods, namely, generalized maximum entropy, Bayes, and maximum a posteriori, for estimating the four parameter Weibull growth model have been presented and compared. To achieve this aim, it is necessary to use a simulation technique to generate the samples and perform the required comparisons, using varying sample sizes (10, 12, 15, 20, 25, and 30) and models depending on the standard deviation (0.5). It has been shown from the computational results that the Bayes method gives the best estimates.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/7967345","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43311273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, comparison results for parametric change-point methodologies applied to maximum temperature records from the municipality of Tlaxco, Tlaxcala, Mexico, are presented. The methodologies considered are the likelihood ratio test, the score test, binary segmentation (BS), pruned exact linear time (PELT), and segment neighborhood (SN). In order to compare these methodologies, a quality analysis of the data was performed; in addition, missing data were estimated with linear regression, and finally, SARIMA models were fitted.
{"title":"Parametric Methodologies for Detecting Changes in Maximum Temperature of Tlaxco, Tlaxcala, México","authors":"Silvia Herrera Cortés, Bulmaro Juárez Hernández, Víctor Hugo Vázquez Guevara, H. A. Cruz Suárez","doi":"10.1155/2019/3580692","DOIUrl":"https://doi.org/10.1155/2019/3580692","url":null,"abstract":"In this paper, comparison results of parametric methodologies of change points, applied to maximum temperature records from the municipality of Tlaxco, Tlaxcala, Mexico, are presented. Methodologies considered are likelihood ratio test, score test, and binary segmentation (BS), pruned exact linear time (PELT), and segment neighborhood (SN). In order to compare such methodologies, a quality analysis of the data was performed; in addition, lost data were estimated with linear regression, and finally, SARIMA models were adjusted.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/3580692","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49364229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Partial least squares (PLS) regression is an alternative to ordinary least squares (OLS) regression, used in the presence of multicollinearity. As with any other modelling method, PLS regression requires a reliable model selection tool. Cross validation (CV) is the most commonly used tool, with many advantages in both preciseness and accuracy, but it also has some drawbacks; therefore, we use the L-curve criterion as an alternative, given that it takes into consideration the shrinking nature of PLS. A theoretical justification for the use of the L-curve criterion is presented, as well as an application on both simulated and real data. The application shows that this criterion generally outperforms cross validation and generalized cross validation (GCV) in mean squared prediction error and computational efficiency.
{"title":"The L-Curve Criterion as a Model Selection Tool in PLS Regression","authors":"Abdelmounaim Kerkri, J. Allal, Zoubir Zarrouk","doi":"10.1155/2019/3129769","DOIUrl":"https://doi.org/10.1155/2019/3129769","url":null,"abstract":"Partial least squares (PLS) regression is an alternative to the ordinary least squares (OLS) regression, used in the presence of multicollinearity. As with any other modelling method, PLS regression requires a reliable model selection tool. Cross validation (CV) is the most commonly used tool with many advantages in both preciseness and accuracy, but it also has some drawbacks; therefore, we will use L-curve criterion as an alternative, given that it takes into consideration the shrinking nature of PLS. A theoretical justification for the use of L-curve criterion is presented as well as an application on both simulated and real data. The application shows how this criterion generally outperforms cross validation and generalized cross validation (GCV) in mean squared prediction error and computational efficiency.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/3129769","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41844531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple solution to determine the distributions of queue lengths at different observation epochs for the GIX/Geo/c model is presented. In the past, various discrete-time queueing models, particularly the multiserver bulk-arrival queues, have been solved using complicated methods that lead to incomplete results. The purpose of this paper is to use the roots method to solve the GIX/Geo/c model, leading to results that are analytically elegant and computationally efficient. The method works well even when the inter-batch-arrival times follow heavy-tailed distributions. The roots of the underlying characteristic equation form the basis for all distributions of queue lengths at different time epochs.
{"title":"Analytically Simple and Computationally Efficient Results for the GIX/Geo/c Queues","authors":"M. Chaudhry, James J. Kim, A. Banik","doi":"10.1155/2019/6480139","DOIUrl":"https://doi.org/10.1155/2019/6480139","url":null,"abstract":"A simple solution to determine the distributions of queue-lengths at different observation epochs for the model GIX/Geo/c is presented. In the past, various discrete-time queueing models, particularly the multiserver bulk-arrival queues, have been solved using complicated methods that lead to incomplete results. The purpose of this paper is to use the roots method to solve the model GIX/Geo/c that leads to a result that is analytically elegant and computationally efficient. This method works well even for the case when the inter-batch-arrival times follow heavy-tailed distributions. The roots of the underlying characteristic equation form the basis for all distributions of queue-lengths at different time epochs.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/6480139","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41684596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents analytically explicit results for the distribution of the number of customers served during a busy period for special cases of the M/G/1 queue when it is initiated with m customers. The functional equation for the Laplace transform of the number of customers served during a busy period is widely known, but several researchers state that, in general, it is not easy to invert except for some simple cases such as the M/M/1 and M/D/1 queues. Using the Lagrange inversion theorem, we give an elegant solution to this equation. We obtain the distribution of the number of customers served during a busy period for various service-time distributions: exponential, deterministic, Erlang-k, gamma, chi-square, inverse Gaussian, generalized Erlang, matrix exponential, hyperexponential, uniform, Coxian, phase-type, Markov-modulated Poisson process, and interrupted Poisson process. Further, we provide computational results using our method. The computations are very fast and robust due to the simplicity of the expressions.
{"title":"Analytically Explicit Results for the Distribution of the Number of Customers Served during a Busy Period for Special Cases of the M/G/1 Queue","authors":"M. Chaudhry, V. Goswami","doi":"10.1155/2019/7398658","DOIUrl":"https://doi.org/10.1155/2019/7398658","url":null,"abstract":"This paper presents analytically explicit results for the distribution of the number of customers served during a busy period for special cases of the M/G/1 queues when initiated with m customers. The functional equation for the Laplace transform of the number of customers served during a busy period is widely known, but several researchers state that, in general, it is not easy to invert it except for some simple cases such as M/M/1 and M/D/1 queues. Using the Lagrange inversion theorem, we give an elegant solution to this equation. We obtain the distribution of the number of customers served during a busy period for various service-time distributions such as exponential, deterministic, Erlang-k, gamma, chi-square, inverse Gaussian, generalized Erlang, matrix exponential, hyperexponential, uniform, Coxian, phase-type, Markov-modulated Poisson process, and interrupted Poisson process. Further, we also provide computational results using our method. The derivations are very fast and robust due to the lucidity of the expressions.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/7398658","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44949057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We obtain weak convergence and optimal scaling results for the random walk Metropolis algorithm with a Gaussian proposal distribution. The sampler is applied to hierarchical target distributions, which form the building block of many Bayesian analyses. The globally, asymptotically optimal proposal variance derived can be computed as a function of the specific target distribution considered. We also introduce the concept of locally optimal tunings, i.e., tunings that depend on the current position of the Markov chain. The theorems are proved by studying the generators of the first and second components of the algorithm and verifying their convergence to the generator of a modified RWM algorithm and a diffusion process, respectively. The rate at which the algorithm explores its state space is optimized by studying the speed measure of the limiting diffusion process. We illustrate the theory with two examples. Applications of these results to simulated and real data are also presented.
{"title":"Hierarchical Models and Tuning of Random Walk Metropolis Algorithms","authors":"M. Bédard","doi":"10.1155/2019/8740426","DOIUrl":"https://doi.org/10.1155/2019/8740426","url":null,"abstract":"We obtain weak convergence and optimal scaling results for the random walk Metropolis algorithm with a Gaussian proposal distribution. The sampler is applied to hierarchical target distributions, which form the building block of many Bayesian analyses. The global asymptotically optimal proposal variance derived may be computed as a function of the specific target distribution considered. We also introduce the concept of locally optimal tunings, i.e., tunings that depend on the current position of the Markov chain. The theorems are proved by studying the generator of the first and second components of the algorithm and verifying their convergence to the generator of a modified RWM algorithm and a diffusion process, respectively. The rate at which the algorithm explores its state space is optimized by studying the speed measure of the limiting diffusion process. We illustrate the theory with two examples. Applications of these results on simulated and real data are also presented.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/8740426","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44778141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regression is used to quantify the relationship between response variables and some covariates of interest. Standard mean regression has been one of the most applied statistical methods for many decades. It aims to estimate the conditional expectation of the response variable given the covariates. However, quantile regression is desired if conditional quantile functions, such as median regression, are of interest. Quantile regression has emerged as a useful supplement to standard mean regression. Also, unlike mean regression, quantile regression is robust to outliers in observations and makes very minimal assumptions on the error distribution, and thus is able to accommodate nonnormal errors. The value of "going beyond the standard mean regression" has been illustrated in many scientific subjects, including economics, ecology, education, finance, survival analysis, microarray studies, growth charts, and so on. In addition, inference on quantiles can accommodate transformation of the outcome of interest without the problems encountered in standard mean regression. Overall, quantile regression offers a more complete statistical model than standard mean regression and now has widespread applications.

There has been a great deal of recent interest in Bayesian approaches to quantile regression models and the applications of these models. In these approaches, uncertain parameters are assigned prior distributions based on expert judgment and updated using observations through the Bayes formula to obtain posterior probability distributions. In this special issue on "Quantile regression and beyond in statistical analysis of data," we have invited a few papers that address such issues.

The first paper of this special issue addresses a fully Bayesian approach that estimates multiple quantile levels simultaneously in one step by using the asymmetric Laplace distribution for the errors, which can be viewed as a mixture of an exponential and a scaled normal distribution. This method enables characterizing the likelihood function by all quantile levels of interest using the relation between two distinct quantile levels. The second paper presents a new link function for distribution-specific quantile regression based on vector generalized linear and additive models to directly model specified quantile levels. The third paper presents a novel modeling approach to study the effect of predictors of various types on the conditional distribution of the response variable. The fourth paper introduces a regularized quantile regression method using pairwise absolute clustering and sparsity penalty, extending it from the mean regression to the quantile regression setting. The final paper of this special issue uses Bayesian quantile regression to study the retirement consumption puzzle, which is defined as the drop in consumption upon retirement, using cross-sectional data from the Malaysian Household Expenditure Survey 2009/2010.
{"title":"Quantile Regression and Beyond in Statistical Analysis of Data","authors":"Rahim Alhamzawi, Keming Yu, Himel Mallick, PhD, FASA","doi":"10.1155/2019/2635306","DOIUrl":"https://doi.org/10.1155/2019/2635306","url":null,"abstract":"Regression is used to quantify the relationship between response variables and some covariates of interest. Standard mean regression has been one of the most applied statistical methods formany decades. It aims to estimate the conditional expectation of the response variable given the covariates. However, quantile regression is desired if conditional quantile functions such as median regression are of interest. Quantile regression has emerged as a useful supplement to standard mean regression. Also, unlike mean regression, quantile regression is robust to outliers in observations and makes very minimal assumptions on the error distribution and thus is able to accommodate nonnormal errors. e value of “going beyond the standard mean regression” has been illustrated in many scientific subjects including economics, ecology, education, finance, survival analysis, microarray study, growth charts, and so on. In addition, inference on quantiles can accommodate transformation of the outcome of the interest without the problems encountered in standard mean regression. Overall, quantile regression offers a more complete statisticalmodel than standardmean regression and now has widespread applications. ere has been a great deal of recent interest in Bayesian approaches to quantile regression models and the applications of these models. In these approaches, uncertain parameters are assigned prior distributions based on expert judgment and updated using observations through the Bayes formula to obtain posterior probability distributions. In this special issue on “Quantile regression and beyond in statistical analysis of data,” we have invited a few papers that address such issues. e first paper of this special issue addresses a fully Bayesian approach that estimates multiple quantile levels simultaneously in one step by using the asymmetric Laplace distribution for the errors, which can be viewed as a mixture of an exponential and a scaled normal distribution. is method enables characterizing the likelihood function by all quantile levels of interest using the relation between two distinct quantile levels. e second paper presents a new link function for distribution–specific quantile regression based on vector generalized linear and additive models to directly model specified quantile levels. e third paper presents a novel modeling approach to study the effect of predictors of various types on the conditional distribution of the response variable. e fourth paper introduces the regularized quantile regression method using pairwise absolute clustering and sparsity penalty, extending from mean regression to quantile regression setting. 
e final paper of this special issue uses Bayesian quantile regression for studying the retirement consumption puzzle, which is defined as the drop in consumption upon retirement, using the cross-sectional data of the Malaysian Household Expenditure Survey 2009/2010.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/2635306","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43380034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
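For readers who want a concrete baseline, here is a frequentist quantile regression sketch with statsmodels; minimizing the check loss corresponds to maximizing an asymmetric Laplace likelihood, the link exploited by the Bayesian approaches discussed above. The simulated heteroscedastic data are illustrative.

```python
# Sketch: quantile regression at several quantile levels. With
# heteroscedastic errors the slope varies across quantiles, which mean
# regression cannot capture.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + rng.standard_t(df=3, size=500) * (1 + 0.2 * x)  # spread grows with x
X = sm.add_constant(x)

for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(y, X).fit(q=q)    # minimizes the check (pinball) loss
    print(f"q={q}: intercept={res.params[0]:.3f}, slope={res.params[1]:.3f}")
```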
In any longitudinal study, dropout before the final timepoint can rarely be avoided. The chosen dropout model is commonly one of the following types: Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), and Shared Parameter (SP). In this paper, we estimate the parameters of the longitudinal model for simulated and real data using the linear mixed effects (LME) method. We investigate the consequences of misspecifying the missingness mechanism by deriving the so-called least false values. These are the values to which the parameter estimates converge when the assumed missingness mechanism is wrong. Knowledge of the least false values allows us to conduct a sensitivity analysis, which is illustrated. This method provides an alternative to a local misspecification sensitivity procedure that has been developed for likelihood-based analysis. We compare the results obtained by the proposed method with those found using the local misspecification method. We apply the local misspecification and least false methods to estimate the bias and sensitivity of parameter estimates in a clinical trial example.
{"title":"An Alternative Sensitivity Approach for Longitudinal Analysis with Dropout","authors":"A. Almohisen, R. Henderson, Arwa M. Alshingiti","doi":"10.1155/2019/1019303","DOIUrl":"https://doi.org/10.1155/2019/1019303","url":null,"abstract":"In any longitudinal study, a dropout before the final timepoint can rarely be avoided. The chosen dropout model is commonly one of these types: Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), and Shared Parameter (SP). In this paper we estimate the parameters of the longitudinal model for simulated data and real data using the Linear Mixed Effect (LME) method. We investigate the consequences of misspecifying the missingness mechanism by deriving the so-called least false values. These are the values the parameter estimates converge to, when the assumptions may be wrong. The knowledge of the least false values allows us to conduct a sensitivity analysis, which is illustrated. This method provides an alternative to a local misspecification sensitivity procedure, which has been developed for likelihood-based analysis. We compare the results obtained by the method proposed with the results found by using the local misspecification method. We apply the local misspecification and least false methods to estimate the bias and sensitivity of parameter estimates for a clinical trial example.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/1019303","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42407745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Different versions of the X¯ control chart are available under various ranked set sampling strategies. In these control charts, computation of performance measures has been carried out through the Monte Carlo simulation method (MCSM). In this article, we define a generalized structure of X¯ control charts under variant sampling strategies and derive their performance measures. For the derivation of these performance measures, we propose a pivotal quantity. For comparative analysis, we present results of the generalized performance measures computed by a numerical method (NM). We found that the values of the generalized performance measures based on the NM are almost identical to those based on the MCSM. Moreover, the NM is time efficient and can be considered an alternative to the MCSM.
{"title":"Generalized Performance Measures of X- Control Charts Based on Different Sampling Schemes","authors":"Rashid Mehmood, Muhammed Hisyam Lee, M. Riaz, Iftikhar Ali","doi":"10.1155/2019/5269357","DOIUrl":"https://doi.org/10.1155/2019/5269357","url":null,"abstract":"Different versions of X- control chart structure are available under various ranked set strategies. In these control charts, computation of performance measures was carried out through Monte Carlo simulation method (MCSM). In this article, we have defined a generalized structure of X- control charts under variant sampling strategies followed by derivation of their different performance measures. For the derivation of different performance measures, we have proposed pivotal quantity. For comparative analysis, we have presented results of generalized performance measures by involving numerical method (NM) as computation. We found that values of generalized performance measures based on NM are almost similar to values of performance measures based on MCSM. Also, NM is time efficient and can be considered as an alternative of MCSM.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2019-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/5269357","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49014488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}