This paper proposes the use of the statistics of similarity values to evaluate the clusterability, or structuredness, associated with a cell formation (CF) problem. Typically, the structuredness of a CF solution cannot be known until the CF problem is solved. In this context, this paper investigates the similarity statistics of machine pairs to estimate the potential structuredness of a given CF problem without solving it. One key observation is that a well-structured CF solution matrix has a relatively high percentage of high-similarity machine pairs. Histograms are then used as a statistical tool to study the distributions of similarity values. This study leads to the development of the U-shape criteria and a criterion based on the Kolmogorov-Smirnov test. Accordingly, a procedure is developed to classify whether an input CF problem can potentially lead to a well-structured or ill-structured CF matrix. In the numerical study, 20 matrices were initially used to determine the threshold values of the criteria, and 40 additional matrices were used to verify the results. Further, these matrix examples show that a genetic algorithm cannot effectively improve the well-structured CF solutions (those with high grouping efficacy values) obtained by hierarchical clustering (a heuristic approach). This result supports the relevance of similarity statistics for pre-examining an input CF problem instance and suggesting a suitable solution approach.
{"title":"Similarity Statistics for Clusterability Analysis with the Application of Cell Formation Problem","authors":"Yingyu Zhu, Simon Li","doi":"10.1155/2018/1348147","DOIUrl":"https://doi.org/10.1155/2018/1348147","url":null,"abstract":"This paper proposes the use of the statistics of similarity values to evaluate the clusterability or structuredness associated with a cell formation (CF) problem. Typically, the structuredness of a CF solution cannot be known until the CF problem is solved. In this context, this paper investigates the similarity statistics of machine pairs to estimate the potential structuredness of a given CF problem without solving it. One key observation is that a well-structured CF solution matrix has a relatively high percentage of high-similarity machine pairs. Then, histograms are used as a statistical tool to study the statistical distributions of similarity values. This study leads to the development of the U-shape criteria and the criterion based on the Kolmogorov-Smirnov test. Accordingly, a procedure is developed to classify whether an input CF problem can potentially lead to a well-structured or ill-structured CF matrix. In the numerical study, 20 matrices were initially used to determine the threshold values of the criteria, and 40 additional matrices were used to verify the results. Further, these matrix examples show that genetic algorithm cannot effectively improve the well-structured CF solutions (of high grouping efficacy values) that are obtained by hierarchical clustering (as one type of heuristics). This result supports the relevance of similarity statistics to preexamine an input CF problem instance and suggest a proper solution approach for problem solving.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/1348147","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47655848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing generalizations of existing statistical distributions to make them more flexible in modeling real data sets is vital in parametric statistical modeling and inference. Thus, this study develops a new class of distributions, called the extended odd Fréchet family of distributions, for modifying existing standard distributions. Two special models, the extended odd Fréchet Nadarajah-Haghighi and the extended odd Fréchet Weibull distributions, are proposed using the developed family. The densities and hazard rate functions of the two special distributions exhibit various monotonic and nonmonotonic shapes. The maximum likelihood method is used to develop estimators for the parameters of the new class of distributions. The application of the special distributions is illustrated by means of a real data set. The results reveal that the special distributions developed from the new family can provide a reasonable parametric fit to the given data set compared with other existing distributions.
{"title":"Extended Odd Fréchet-G Family of Distributions","authors":"Suleman Nasiru","doi":"10.1155/2018/2931326","DOIUrl":"https://doi.org/10.1155/2018/2931326","url":null,"abstract":"The need to develop generalizations of existing statistical distributions to make them more flexible in modeling real data sets is vital in parametric statistical modeling and inference. Thus, this study develops a new class of distributions called the extended odd Fréchet family of distributions for modifying existing standard distributions. Two special models named the extended odd Fréchet Nadarajah-Haghighi and extended odd Fréchet Weibull distributions are proposed using the developed family. The densities and the hazard rate functions of the two special distributions exhibit different kinds of monotonic and nonmonotonic shapes. The maximum likelihood method is used to develop estimators for the parameters of the new class of distributions. The application of the special distributions is illustrated by means of a real data set. The results revealed that the special distributions developed from the new family can provide reasonable parametric fit to the given data set compared to other existing distributions.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/2931326","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44615013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach toward a probabilistic proof of the convergence of the Collatz conjecture is described. It identifies a sequential correlation over the even natural numbers in which divisions by 2 follow a recurrent pattern of the form x, 1, x, 1, …, where x represents division by 2 more than once. The sequence exhibits a 50:50 probability of division by 2 more than once versus division by 2 exactly once over the even natural numbers. Counting consecutive even Collatz elements for division by 2 more than once versus exactly once gives the same 50:50 probability, and a ratio of 3:1. Treating the Collatz function as producing random numbers, over a sufficient number of iterations this probability distribution produces numbers in descending order, leading to the convergence of the Collatz function to 1, assuming that the only cycle of the function is 1-4-2-1.
{"title":"On the Probabilistic Proof of the Convergence of the Collatz Conjecture","authors":"K. Barghout","doi":"10.1155/2019/6814378","DOIUrl":"https://doi.org/10.1155/2019/6814378","url":null,"abstract":"A new approach towards probabilistic proof of the convergence of the Collatz conjecture is described via identifying a sequential correlation of even natural numbers by divisions by 2 that follows a recurrent pattern of the form x,1,x,1…, where x represents divisions by 2 more than once. The sequence presents a probability of 50:50 of division by 2 more than once as opposed to division by 2 once over the even natural numbers. The sequence also gives the same 50:50 probability of consecutive Collatz even elements when counted for division by 2 more than once as opposed to division by 2 once and a ratio of 3:1. Considering Collatz function producing random numbers and over sufficient number of iterations, this probability distribution produces numbers in descending order that lead to the convergence of the Collatz function to 1, assuming that the only cycle of the function is 1-4-2-1.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/6814378","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47987333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we present a new family of distributions obtained by generalizing the extended bimodal-normal distribution. This family includes several special cases, such as the normal, Birnbaum-Saunders, Student's t, and Laplace distributions, which are developed and defined using stochastic representations. The theoretical properties are derived, and easily implemented Monte Carlo simulation schemes are presented. An inferential study is performed for the Laplace distribution. We end with illustrations on two real data sets.
{"title":"A New Class of Distributions Generated by the Extended Bimodal-Normal Distribution","authors":"M. A. Cortés, David Elal-Olivero, Juan F. Olivares-Pacheco","doi":"10.1155/2018/9753439","DOIUrl":"https://doi.org/10.1155/2018/9753439","url":null,"abstract":"In this study, we present a new family of distributions through generalization of the extended bimodal-normal distribution. This family includes several special cases, like the normal, Birnbaum-Saunders, Student’s t, and Laplace distribution, that are developed and defined using stochastic representation. The theoretical properties are derived, and easily implemented Monte Carlo simulation schemes are presented. An inferential study is performed for the Laplace distribution. We end with an illustration of two real data sets.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/9753439","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44837644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneity between individuals has attracted attention in the survival analysis literature for several decades. Widowed individuals also differ; some are more frail than others and thereby have a higher risk of dying. The traditional hazard rate in a survival model is a measure of population risk and does not provide direct information on the unobservable individual risk. A frailty model is developed and applied to a large Norwegian data set of 452,788 widowed individuals. The model appears to fit the data well, for both widowers and widows in all age groups. The random frailty term in the model is significant, indicating that widowed persons may have individual hazard rates.
{"title":"Frailty in Survival Analysis of Widowhood Mortality","authors":"E. Ytterstad","doi":"10.1155/2018/2378798","DOIUrl":"https://doi.org/10.1155/2018/2378798","url":null,"abstract":"Heterogeneity between individuals has attracted attention in the literature of survival analysis for several decades. Widowed individuals also differ; some are more frail than others and thereby have a higher risk of dying. The traditional hazard rate in a survival model is a measure of population risk and does not provide direct information on the unobservable individual risk. A frailty model is developed and applied on a large Norwegian data set of 452 788 widowed individuals. The model seemed to fit the data well, for both widowers and widows in all age groups. The random frailty term in the model is significant, meaning that widowed persons may have individual hazard rates.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/2378798","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48954238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study proposes EWMA-type control charts that incorporate auxiliary information. The ratio estimation technique for the mean under a ranked set sampling design is used in designing the control structure of the proposed charts. We develop EWMA control charts for the process mean using two exponential ratio-type estimators based on ranked set sampling to achieve specific average run lengths (ARLs), which makes the charts suitable when small process shifts are of interest.
{"title":"Exponentially Weighted Moving Average Control Charts for the Process Mean Using Exponential Ratio Type Estimator","authors":"Hina Khan, Saleh Farooq, M. Aslam, Masood Amjad Khan","doi":"10.1155/2018/9413939","DOIUrl":"https://doi.org/10.1155/2018/9413939","url":null,"abstract":"This study proposes EWMA-type control charts by considering some auxiliary information. The ratio estimation technique for the mean with ranked set sampling design is used in designing the control structure of the proposed charts. We have developed EWMA control charts using two exponential ratio-type estimators based on ranked set sampling for the process mean to obtain specific ARLs, being suitable when small process shifts are of interest.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/9413939","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46648136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple and efficient empirical likelihood ratio (ELR) test for normality based on moment constraints of the half-normal distribution was developed. The proposed test can also be easily modified to test for departures from half-normality and is relatively simple to implement in various statistical packages with no ordering of observations required. Using Monte Carlo simulations, our test proved to be superior to other well-known existing goodness-of-fit (GoF) tests considered under symmetric alternative distributions for small to moderate sample sizes. A real data example revealed the robustness and applicability of the proposed test as well as its superiority in power over other common existing tests studied.
{"title":"A Simple Empirical Likelihood Ratio Test for Normality Based on the Moment Constraints of a Half-Normal Distribution","authors":"C. Marange, Y. Qin","doi":"10.1155/2018/8094146","DOIUrl":"https://doi.org/10.1155/2018/8094146","url":null,"abstract":"A simple and efficient empirical likelihood ratio (ELR) test for normality based on moment constraints of the half-normal distribution was developed. The proposed test can also be easily modified to test for departures from half-normality and is relatively simple to implement in various statistical packages with no ordering of observations required. Using Monte Carlo simulations, our test proved to be superior to other well-known existing goodness-of-fit (GoF) tests considered under symmetric alternative distributions for small to moderate sample sizes. A real data example revealed the robustness and applicability of the proposed test as well as its superiority in power over other common existing tests studied.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/8094146","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42204849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper deals with a batch-arrival, infinite-buffer, single-server queue. The interbatch arrival times are generally distributed, and arrivals occur in batches of random size. The service process is correlated, and its structure is represented by a continuous-time Markovian service process (C-MSP). We obtain the probability density function (p.d.f.) of the actual waiting time for the first and for an arbitrary customer of an arrival batch. The proposed analysis is based on the roots of the characteristic equations involved in the Laplace-Stieltjes transforms (LSTs) of the waiting times in the system for the first, an arbitrary, and the last customer of an arrival batch. The corresponding mean sojourn times in the system may be obtained from these probability density functions or from the LSTs. Numerical results for some variants of the interbatch arrival distribution (Pareto and phase-type) are presented to show the influence of model parameters on the waiting-time distribution. Finally, a simple computational procedure (solving a set of simultaneous linear equations) is proposed to obtain the R matrix of the corresponding GI/M/1-type Markov chain embedded at a prearrival epoch of a batch.
{"title":"A Note on the Waiting-Time Distribution in an Infinite-Buffer GI[X]/C-MSP/1 Queueing System","authors":"A. Banik, M. Chaudhry, James J. Kim","doi":"10.1155/2018/7462439","DOIUrl":"https://doi.org/10.1155/2018/7462439","url":null,"abstract":"This paper deals with a batch arrival infinite-buffer single server queue. The interbatch arrival times are generally distributed and arrivals are occurring in batches of random size. The service process is correlated and its structure is presented through a continuous-time Markovian service process (C-MSP). We obtain the probability density function (p.d.f.) of actual waiting time for the first and an arbitrary customer of an arrival batch. The proposed analysis is based on the roots of the characteristic equations involved in the Laplace-Stieltjes transform (LST) of waiting times in the system for the first, an arbitrary, and the last customer of an arrival batch. The corresponding mean sojourn times in the system may be obtained using these probability density functions or the above LSTs. Numerical results for some variants of the interbatch arrival distribution (Pareto and phase-type) have been presented to show the influence of model parameters on the waiting-time distribution. Finally, a simple computational procedure (through solving a set of simultaneous linear equations) is proposed to obtain the “R” matrix of the corresponding GI/M/1-type Markov chain embedded at a prearrival epoch of a batch.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/7462439","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46926062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a quasi-likelihood nonlinear model with random effects, which is a hybrid extension of quasi-likelihood nonlinear models and generalized linear mixed models and includes a wide class of existing models as special cases. A novel penalized quasi-likelihood estimation method is introduced. Based on the Laplace approximation and a penalized quasi-likelihood displacement, the local influence of minor perturbations on the data set is investigated for the proposed model. Four concrete perturbation schemes are considered in the local influence analysis. The effectiveness of the proposed methodology is illustrated through numerical analyses of a pharmacokinetics data set.
{"title":"Local Influence Analysis for Quasi-Likelihood Nonlinear Models with Random Effects","authors":"Tian Xia, Jiancheng Jiang, Xuejun Jiang","doi":"10.1155/2018/4878925","DOIUrl":"https://doi.org/10.1155/2018/4878925","url":null,"abstract":"We propose a quasi-likelihood nonlinear model with random effects, which is a hybrid extension of quasi-likelihood nonlinear models and generalized linear mixed models. It includes a wide class of existing models as examples. A novel penalized quasi-likelihood estimation method is introduced. Based on the Laplace approximation and a penalized quasi-likelihood displacement, local influence of minor perturbations on the data set is investigated for the proposed model. Four concrete perturbation schemes are considered in the local influence analysis. The effectiveness of the proposed methodology is illustrated by some numerical examinations on a pharmacokinetics dataset.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/4878925","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42468627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed effects models are widely used for modelling clustered data when there are large variations between clusters, since mixed effects models allow for cluster-specific inference. In some longitudinal studies, such as HIV/AIDS studies, it is common that some time-varying covariates may be left or right censored due to detection limits, may be missing at times of interest, or may be measured with errors. To address these "incomplete data" problems, a common approach is to model the time-varying covariates based on the observed covariate data and then use the fitted model to "predict" the censored, missing, or mismeasured covariates. In this article, we review common approaches for censored covariates in longitudinal and survival response models and advocate nonlinear mechanistic covariate models when such models are available.
{"title":"Mixed Effects Models with Censored Covariates, with Applications in HIV/AIDS Studies","authors":"Lang Wu, Hongbin Zhang","doi":"10.1155/2018/1581979","DOIUrl":"https://doi.org/10.1155/2018/1581979","url":null,"abstract":"Mixed effects models are widely used for modelling clustered data when there are large variations between clusters, since mixed effects models allow for cluster-specific inference. In some longitudinal studies such as HIV/AIDS studies, it is common that some time-varying covariates may be left or right censored due to detection limits, may be missing at times of interest, or may be measured with errors. To address these “incomplete data“ problems, a common approach is to model the time-varying covariates based on observed covariate data and then use the fitted model to “predict” the censored or missing or mismeasured covariates. In this article, we provide a review of the common approaches for censored covariates in longitudinal and survival response models and advocate nonlinear mechanistic covariate models if such models are available.","PeriodicalId":44760,"journal":{"name":"Journal of Probability and Statistics","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2018-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2018/1581979","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46400234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}