{"title":"On the Canonical-Based Goodness-of-fit Tests for Multivariate Skew-Normality","authors":"Saeed Darijani, H. Zakerzadeh, H. Torabi","doi":"10.52547/JIRSS.19.2.119","DOIUrl":"https://doi.org/10.52547/JIRSS.19.2.119","url":null,"abstract":"It is well-known that the skew-normal distribution can provide an alternative model to the normal distribution for analyzing asymmetric data. The aim of this paper is to propose two goodness-of-fit tests for assessing whether a sample comes from a multivariate skew-normal (MSN) distribution. We address the problem of multivariate skew-normality goodness-of-fit based on the empirical Laplace transform and empirical characteristic function, respectively, using the canonical form of the MSN distribution. Applications with Monte Carlo simulations and real-life data examples are reported to illustrate the usefulness of the new tests.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48547320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
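The second test above is built on the empirical characteristic function. As a minimal sketch of that ingredient only — not the paper's canonical-form test statistic, and with an illustrative function name — the empirical characteristic function of a multivariate sample can be computed as:

```python
import numpy as np

def empirical_cf(X, t):
    """Empirical characteristic function of a d-variate sample X (n x d):
    phi_n(t) = (1/n) * sum_j exp(i * <t, X_j>).
    CF-based goodness-of-fit tests integrate a weighted distance between
    phi_n and the characteristic function of the fitted model."""
    X = np.atleast_2d(X)
    t = np.asarray(t, dtype=float)
    return np.mean(np.exp(1j * (X @ t)))
```

For a standard normal sample, for example, phi_n(1) should be close to exp(-1/2), the characteristic function of N(0, 1) at t = 1.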
{"title":"A New Algorithm to Impute the Missing Values in the Multivariate Case","authors":"I. Almasi, Mohsen Salehi, M. Moradi","doi":"10.52547/JIRSS.19.2.133","DOIUrl":"https://doi.org/10.52547/JIRSS.19.2.133","url":null,"abstract":"There are several methods for making inferences about the parameters of the sampling distribution when we encounter missing values and censored data. In this paper, a novel algorithm based on order statistics and the projection theorem is proposed to impute missing values in the multivariate case. The performance of this method is then investigated through simulation studies. To validate the proposed method and compare it with other methods, a real data set is used.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48263139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Jackknifed Liu-type Estimator in Poisson Regression Model","authors":"Ahmed Alkhateeb, Z. Algamal","doi":"10.29252/jirss.19.1.21","DOIUrl":"https://doi.org/10.29252/jirss.19.1.21","url":null,"abstract":"The Liu estimator has consistently been demonstrated to be an attractive shrinkage method for reducing the effects of multicollinearity. The Poisson regression model is a well-known model in applications when the response variable consists of count data. However, it is known that multicollinearity negatively affects the variance of the maximum likelihood estimator (MLE) of the Poisson regression coefficients. To address this problem, a Poisson Liu estimator has been proposed by numerous researchers. In this paper, a Jackknifed Liu-type Poisson estimator (JPLTE) is proposed and derived. The idea behind the JPLTE is to decrease the shrinkage parameter and, therefore, improve the resultant estimator by reducing the amount of bias. Our Monte Carlo simulation results suggest that the JPLTE estimator can bring significant improvements relative to other existing estimators. In addition, the results of a real application demonstrate that the JPLTE estimator outperforms both the Poisson Liu estimator and the maximum likelihood estimator in terms of predictive performance.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"19 1","pages":"21-37"},"PeriodicalIF":0.4,"publicationDate":"2020-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41396110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
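The shrinkage idea behind the Liu estimator is easiest to see in the plain linear-regression form; the paper's jackknifed Poisson variant operates on the weighted maximum-likelihood problem instead. A minimal sketch of the basic form, with illustrative names:

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu shrinkage estimator for a linear model, a simplified
    illustration of the shrinkage idea:
        beta_d = (X'X + I)^{-1} (X'X + d*I) beta_OLS,  0 < d < 1.
    At d = 1 it reduces to OLS; smaller d shrinks the coefficients,
    trading bias for variance under multicollinearity."""
    XtX = X.T @ X
    p = XtX.shape[0]
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ beta_ols)
```

Because (X'X + I)^{-1}(X'X + d*I) has eigenvalues (lambda + d)/(lambda + 1) < 1 for d < 1, the Liu estimate never has larger Euclidean norm than the OLS estimate.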
{"title":"Accurate Inference for the Mean of the Poisson-Exponential Distribution","authors":"Wei Lin, Xiang Li, A. Wong","doi":"10.29252/jirss.19.1.1","DOIUrl":"https://doi.org/10.29252/jirss.19.1.1","url":null,"abstract":"Although the random sum distribution has been well-studied in probability theory, inference for the mean of such a distribution is very limited in the literature. In this paper, two approaches are proposed to obtain inference for the mean of the Poisson-Exponential distribution. Both proposed approaches require the log-likelihood function of the Poisson-Exponential distribution, but the exact form of the log-likelihood function is not available. An approximate form of the log-likelihood function is then derived by the saddlepoint method. Inference for the mean of the Poisson-Exponential distribution can either be obtained from the modified signed likelihood root statistic or from the Bartlett corrected likelihood ratio statistic. The explicit form of the modified signed likelihood root statistic is derived in this paper, and a systematic method to numerically approximate the Bartlett correction factor, and hence the Bartlett corrected likelihood ratio statistic, is proposed. Simulation studies show that both methods are extremely accurate even when the sample size is small.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"19 1","pages":"1-19"},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42675815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
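The Poisson-Exponential random sum itself is easy to simulate, which gives a quick sanity check on the model's mean; this is a Monte Carlo sketch with illustrative names, not the saddlepoint machinery of the paper:

```python
import numpy as np

def simulate_poisson_exponential(lam, rate, size, rng):
    """Draw from S = X_1 + ... + X_N with N ~ Poisson(lam) and the X_i
    i.i.d. Exponential(rate); S = 0 when N = 0.
    By Wald's identity, E[S] = E[N] * E[X] = lam / rate."""
    n = rng.poisson(lam, size=size)
    # A sum of n rate-`rate` exponentials is Gamma(n, scale=1/rate).
    draws = rng.gamma(np.maximum(n, 1), 1.0 / rate)
    return np.where(n > 0, draws, 0.0)
```

With lam = 2 and rate = 0.5, the sample mean of a large simulation should be close to 2 / 0.5 = 4.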
{"title":"Bounds for CDFs of Order Statistics Arising from INID Random Variables","authors":"J. Kazempoor, A. Habibirad, Kheirolah Okhli","doi":"10.29252/jirss.19.1.39","DOIUrl":"https://doi.org/10.29252/jirss.19.1.39","url":null,"abstract":"In recent decades, studying order statistics arising from independent and not necessarily identically distributed (INID) random variables has been a main concern for researchers. Computing the cumulative distribution function (CDF) of these order statistics (Fi:n) is complex, time-consuming, and software-intensive. Therefore, obtaining approximations and bounds for Fi:n and other theoretical properties of these variables, such as moments, quantiles, the characteristic function, and some related probabilities, has always been a main challenge. Recently, Bayramoglu (2018) provided a new definition of ordering, by point-to-point ordering of the Fi's (D-order), showed that these new functions are CDFs and that the corresponding random variables are independent, and thus suggested new CDFs (F[i]) that can be used as alternatives to Fi:n. Using only F[1] and F[n], we derive upper and lower bounds for Fi:n, together with a precise approximation for F1:n and Fn:n. In many cases, approximations for the other CDFs are also derived. In addition, we compare the approximated functions with those offered by Bayramoglu and show that the proposed functions perform far better than the D-order functions.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"19 1","pages":"39-57"},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48325165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
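For the two extreme order statistics the INID CDFs are available in closed form, which is what makes constructions built from F[1] and F[n] attractive; intermediate Fi:n require sums over subsets of indices. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def cdf_min_max_inid(cdf_values):
    """Exact CDFs of the sample minimum and maximum at a point x for
    independent, non-identically distributed variables, given the vector
    (F_1(x), ..., F_n(x)):
        F_{1:n}(x) = 1 - prod_i (1 - F_i(x))   (minimum)
        F_{n:n}(x) = prod_i F_i(x)             (maximum)."""
    F = np.asarray(cdf_values, dtype=float)
    return 1.0 - np.prod(1.0 - F), np.prod(F)
```

For two exponentials with rates 1 and 2 evaluated at x, for instance, the minimum's CDF is 1 - e^{-3x}, since the minimum of independent exponentials is exponential with the summed rate.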
{"title":"Sequential-Based Approach for Estimating the Stress-Strength Reliability Parameter for Exponential Distribution","authors":"Ashkan Khalifeh, E. Mahmoudi, A. Dolati","doi":"10.29252/jirss.19.1.85","DOIUrl":"https://doi.org/10.29252/jirss.19.1.85","url":null,"abstract":"In this paper, two-stage and purely sequential estimation procedures are considered to construct fixed-width confidence intervals for the reliability parameter under the stress-strength model when the stress and strength are independent exponential random variables with different scale parameters. The exact distribution of the stopping rule under the purely sequential procedure is approximated using the law of large numbers and Monte Carlo integration. For the two-stage sequential procedure, explicit formulas for the distribution of the total sample size, the expected value and mean squared error of the maximum likelihood estimator of the reliability parameter under the stress-strength model are provided. Moreover, it is shown that both proposed sequential procedures are finite, and in exceptional cases, the exact distribution of the stopping times is a degenerate distribution at the initial sample size. The performances of the proposed methodologies are investigated with the help of simulations. Finally, the procedures are illustrated using real data.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"49 12","pages":"85-120"},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41295761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
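For independent exponentials the stress-strength reliability R = P(Y < X) has a closed form, R = theta_x / (theta_x + theta_y) in the scale (mean) parameters, and the plug-in MLE just substitutes sample means. A sketch of this baseline (the sequential stopping rules of the paper are not shown; names are illustrative):

```python
import numpy as np

def reliability_exponential(theta_x, theta_y):
    """Closed-form R = P(Y < X) when strength X ~ Exp(scale theta_x) and
    stress Y ~ Exp(scale theta_y) are independent:
        R = theta_x / (theta_x + theta_y)."""
    return theta_x / (theta_x + theta_y)

def reliability_mle(x_sample, y_sample):
    """Plug-in ML estimate of R: the sample means are the MLEs of the
    exponential scale parameters, and R is their invariant transform."""
    return reliability_exponential(np.mean(x_sample), np.mean(y_sample))
```

With theta_x = 2 and theta_y = 1, the true reliability is 2/3, and the plug-in estimate from large samples should land very close to it.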
{"title":"A New Proof of FDR Control Based on Forward Filtration","authors":"A. Ehyaei, Kasra Alishahi, A. Shojaei","doi":"10.29252/jirss.19.1.59","DOIUrl":"https://doi.org/10.29252/jirss.19.1.59","url":null,"abstract":"For multiple testing problems, Benjamini and Hochberg (1995) proposed the false discovery rate (FDR) as an alternative to the family-wise error rate (FWER). Since then, researchers have provided many proofs to control the FDR under different assumptions. Storey et al. (2004) showed that the rejection threshold of a BH step-up procedure is a stopping time with respect to the reverse filtration generated by the p-values and proposed a new proof based on the martingale theory. Following this work, martingale methods have been widely used to establish FDR control in various settings, but have been primarily applied to reverse filtration only. However, forward filtration can be more amenable for generalized and adaptive FDR controlling procedures. In this paper, we present a new proof, based on forward filtration, for step-down FDR controlling procedures that start from small p-values and update the rejection regions as larger p-values are observed.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"19 1","pages":"59-68"},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41810943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
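The BH step-up procedure discussed above is simple to state concretely: sort the m p-values, find the largest k with p_(k) <= alpha*k/m, and reject the k hypotheses with the smallest p-values. A minimal sketch of the standard procedure (not the paper's forward-filtration proof):

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses with
    the k smallest p-values, where k is the LARGEST index satisfying
    p_(k) <= alpha * k / m.  Controls the FDR at level alpha under
    independence.  Returns a boolean rejection mask in original order."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject
```

The "step-up" character is the `np.max` over qualifying indices: a p-value above its own threshold is still rejected if some larger-indexed p-value meets its threshold.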
{"title":"On Conditional Inactivity Time of Failed Components in an (n-k+1)-out-of-n System with Nonidentical Independent Components","authors":"F. Sajadi, M. H. Poursaeed, S. Goli","doi":"10.29252/jirss.19.1.69","DOIUrl":"https://doi.org/10.29252/jirss.19.1.69","url":null,"abstract":"In this paper, we study an (n-k+1)-out-of-n system whose components are assumed to be statistically independent but not identically distributed. Assuming that at least m components have failed by a fixed time while the system is still working, we obtain a mixture representation of the survival function for a quantity called the conditional inactivity time of the failed components in the system. Moreover, this quantity is stochastically compared within one sample, with respect to k and m, and between two samples.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"19 1","pages":"69-83"},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42602021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bivariate Extension of Past Entropy","authors":"G. Rajesh, E. I. Abdul-Sathar, K. V. Reshmi","doi":"10.29252/jirss.19.1.185","DOIUrl":"https://doi.org/10.29252/jirss.19.1.185","url":null,"abstract":"Di Crescenzo and Longobardi (2002) proposed a measure of uncertainty related to past life, namely past entropy. The present paper addresses the question of extending this concept to the bivariate set-up and studies some properties of the proposed measure. It is shown that the proposed measure uniquely determines the distribution function. Characterizations for some bivariate lifetime models are obtained using the proposed measure. Further, we define new classes of life distributions based on this measure, and properties of the new classes are also discussed. We also propose a non-parametric kernel estimator for the proposed measure and illustrate the performance of the estimator using numerical data. MSC: 62G30; 62E10; 62B10.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49524098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
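The univariate measure being extended here is H(t) = -∫_0^t [f(x)/F(t)] log[f(x)/F(t)] dx, the entropy of the lifetime conditional on failure before t. A numerical-quadrature sketch of that univariate quantity (not the paper's bivariate extension or kernel estimator; the function name is illustrative):

```python
import numpy as np

def past_entropy(density, t, n=20001):
    """Past entropy of Di Crescenzo and Longobardi (2002),
        H(t) = -int_0^t [f(x)/F(t)] log[f(x)/F(t)] dx,
    computed by trapezoidal quadrature; `density` must be positive
    on (0, t)."""
    x = np.linspace(1e-9, t, n)
    fx = density(x)

    def trapz(y):  # simple trapezoidal rule on the grid x
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    Ft = trapz(fx)   # F(t) = int_0^t f(x) dx
    g = fx / Ft      # conditional density of X given X <= t
    return -trapz(g * np.log(g))
```

For a uniform lifetime on (0, 1) the conditional density given X <= t is constant at 1/t, so H(t) = log t: zero at t = 1 and negative before, as differential entropies may be.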
{"title":"Parameter Estimation of Some Archimedean Copulas Based on Minimum Cramér-von-Mises Distance","authors":"Selim Orhun Susam","doi":"10.29252/jirss.19.1.163","DOIUrl":"https://doi.org/10.29252/jirss.19.1.163","url":null,"abstract":"The purpose of this paper is to introduce a new method for estimating the Archimedean copula dependence parameter in the non-parametric setting. The dependence parameter estimate is chosen as the value that minimizes the Cramér-von-Mises distance between the empirical Bernstein Kendall distribution function and the true Kendall distribution function. A Monte Carlo study is performed to measure the performance of the new estimator and to compare it with conventional estimation methods. In terms of estimation performance, simulation results show that the proposed minimum Cramér-von-Mises estimation method performs well for low dependence and small sample sizes when compared with the other estimation methods. The new minimum distance estimator of the dependence parameter is applied to model the dependence of two real data sets.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"19 1","pages":"163-183"},"PeriodicalIF":0.4,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44184382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
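The minimum-distance idea can be sketched for the Clayton family, where the Kendall distribution K(t) = t - phi(t)/phi'(t) is available in closed form. This sketch uses the raw ECDF of the pseudo-observations rather than the Bernstein-smoothed estimator of the paper, and a plain grid search; all names are illustrative:

```python
import numpy as np

def empirical_kendall_pseudo_obs(u, v):
    """Pseudo-observations W_i = #{j : u_j < u_i, v_j < v_i} / (n - 1),
    whose ECDF estimates the Kendall distribution function."""
    u, v = np.asarray(u), np.asarray(v)
    n = u.size
    return np.array([np.sum((u < u[i]) & (v < v[i]))
                     for i in range(n)]) / (n - 1)

def clayton_kendall_cdf(t, theta):
    """Kendall distribution K(t) = t - phi(t)/phi'(t) for the Clayton
    generator phi(t) = (t**-theta - 1)/theta, which simplifies to
        K(t) = t + t * (1 - t**theta) / theta."""
    t = np.asarray(t, dtype=float)
    return t + t * (1.0 - t ** theta) / theta

def cvm_theta_estimate(u, v, grid=None):
    """Grid search for theta minimizing a Cramér-von-Mises-type distance
    between the empirical and theoretical Kendall distributions."""
    if grid is None:
        grid = np.linspace(0.05, 10.0, 400)
    w = np.sort(empirical_kendall_pseudo_obs(u, v))
    ecdf = np.arange(1, w.size + 1) / w.size
    dist = [np.mean((ecdf - clayton_kendall_cdf(w, th)) ** 2) for th in grid]
    return float(grid[int(np.argmin(dist))])
```

Since phi(t)/phi'(t) <= 0 on (0, 1] for a valid Archimedean generator, K(t) >= t everywhere, a property the Clayton formula above satisfies term by term.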