{"title":"Corrected Confidence Intervals for a Small Area Parameter through the Weighted Estimator under the Basic Area Level Model","authors":"Y. Shiferaw, J. Galpin","doi":"10.29252/JIRSS.18.1.17","DOIUrl":"https://doi.org/10.29252/JIRSS.18.1.17","url":null,"abstract":"Area level linear mixed models can generally be applied to produce small area indirect estimators when only aggregated data, such as sample means, are available. This paper addresses an important research gap in the small area estimation literature: the problem of constructing confidence intervals (CIs) when the estimated variance of the random effect, as well as the estimated mean squared error (MSE), is negative. More precisely, the coverage accuracy of the proposed CI is of the order O(m−3/2), where m is the number of sampled areas. The performance of the proposed method is illustrated with respect to coverage probability (CP) and average length (AL) in a simulation experiment. Simulation results demonstrate the superiority of the proposed method over existing naive CIs. In addition, the proposed CI based on the weighted estimator is comparable with existing corrected CIs based on the empirical best linear unbiased predictor (EBLUP) in the literature.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49583610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classical and Bayesian Estimation of the AR(1) Model with Skew-Symmetric Innovations","authors":"A. Hajrajabi, A. Fallah","doi":"10.29252/JIRSS.18.1.157","DOIUrl":"https://doi.org/10.29252/JIRSS.18.1.157","url":null,"abstract":"This paper considers a first-order autoregressive model with skew-normal innovations from a parametric point of view. We develop the essential theory for computing maximum likelihood estimates of the model parameters via an Expectation-Maximization (EM) algorithm. A Bayesian method is also proposed to estimate the unknown parameters of the model. The efficiency and applicability of the proposed model are assessed via a simulation study and a real-world example.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46941222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized Family of Estimators for Imputing Scrambled Responses","authors":"M. U. Sohail, S. Javid, C. Kadilar, Shakeel Ahmed","doi":"10.29252/JIRSS.17.2.1","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.1","url":null,"abstract":"When there is a high correlation between the study and auxiliary variables, the rank of the auxiliary variable is also correlated with the study variable. The rank may then be used as an additional auxiliary variable to increase the efficiency of estimators of the population mean or total. In the present study, we propose two generalized families of estimators for imputing scrambled responses by using the variance and the rank of the auxiliary variable. Expressions for the bias and the mean squared error are obtained up to the first order of approximation. A numerical study is carried out to assess the performance of the estimators.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"17 1","pages":"91-117"},"PeriodicalIF":0.4,"publicationDate":"2018-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47648276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Distribution Family Constructed by Fractional Polynomial Rank Transmutation","authors":"M. Yilmaz","doi":"10.29252/JIRSS.17.2.7","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.7","url":null,"abstract":"In this study, a new polynomial rank transmutation is proposed based on the idea of the quadratic rank transmutation mapping (QRTM). This polynomial rank transmutation extends the range of the transmutation parameter from [−1, 1] to [−1, k], so the generated distributions gain more flexibility than a transmuted distribution constructed by QRTM. The distribution family obtained by this transmutation is considered an alternative to the families obtained by quadratic rank transmutation. Statistical and reliability properties of this family are examined. Taking the Weibull distribution as the base distribution, the importance and flexibility of the proposed families are illustrated by two applications.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47060131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymmetric Uniform-Laplace distribution: Properties and Applications","authors":"Fatemeh Arezoomand, M. Yarmohammadi, R. Mahmoudvand","doi":"10.29252/JIRSS.17.2.6","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.6","url":null,"abstract":"The goal of this study is to introduce the Asymmetric Uniform-Laplace (AUL) distribution. We present a detailed theoretical description of this distribution and estimate its parameters by the maximum likelihood method. Since the likelihood approach results in complicated forms, we suggest a bootstrap-based approach for estimating the parameters. The proposed method is mainly based on the shape of the empirical density. We conduct a simulation study to assess the performance of the proposed procedure. We also fit the AUL distribution to real data sets: the daily working time and Pontius data sets. The results show that the AUL distribution is a more appropriate choice than the Skew-Normal, Skew-t, Asymmetric Laplace, and Uniform-Normal distributions.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46007727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial Interpolation Using Copula for non-Gaussian Modeling of Rainfall Data","authors":"M. Omidi, M. Mohammadzadeh","doi":"10.29252/JIRSS.17.2.8","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.8","url":null,"abstract":"One of the most useful tools for handling multivariate distributions of dependent variables in terms of their marginal distributions is the copula function. Copula families have attracted considerable attention due to their applicability and flexibility in describing non-Gaussian spatially dependent data. The particular properties required of a spatial copula are rarely seen among the known copula families. In the present paper, a spatial copula function is constructed from the weighted geometric mean of two Max-id copulas. Afterwards, the proposed copula, along with the Bees algorithm, is used to explore the spatial dependency and to interpolate rainfall data in Iran’s Khuzestan province.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"1 1","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41372385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized Baum-Welch and Viterbi Algorithms Based on the Direct Dependency among Observations","authors":"Vahid Rezaei Tabar, D. Plewczyński, Hosna Fathipour","doi":"10.29252/jirss.17.2.10","DOIUrl":"https://doi.org/10.29252/jirss.17.2.10","url":null,"abstract":"The parameters of a Hidden Markov Model (HMM) are the transition and emission probabilities. Both can be estimated using the Baum-Welch algorithm. The process of discovering the sequence of hidden states, given the sequence of observations, is performed by the Viterbi algorithm. In both the Baum-Welch and Viterbi algorithms, it is assumed that, given the states, the observations are independent of each other. In this paper, we first consider the direct dependency between consecutive observations in the HMM, and then use conditional independence relations in the context of a Bayesian network, a probabilistic graphical model, to generalize the Baum-Welch and Viterbi algorithms. We compare the performance of the generalized algorithms with the commonly used ones in simulation studies for synthetic data.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":"1 1","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69861208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ridge stochastic restricted estimators in semiparametric linear measurement error models","authors":"Hadi Emami","doi":"10.29252/JIRSS.17.2.9","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.9","url":null,"abstract":"In this article we consider stochastic restricted ridge estimation in semiparametric linear models when the covariates are measured with additive errors. The development of a penalized corrected likelihood method in such a model is the basis for deriving the ridge estimates. The asymptotic normality of the resulting estimates is established. Also, necessary and sufficient conditions on the ridge parameter k for the superiority of the proposed estimator over its counterpart are obtained. A Monte Carlo simulation study is also performed to illustrate the finite sample performance of the proposed procedures. Finally, the theoretical results are applied to an Egyptian pottery industry data set.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41534680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear Regression Models Based on Slash Skew-elliptical Errors","authors":"Siavash Pirzadeh Nahooji, R. Farnoosh, N. Nematollahi","doi":"10.29252/JIRSS.17.2.3","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.3","url":null,"abstract":"In this paper, we consider nonlinear regression models in which the model errors follow a slash skew-elliptical distribution. In the special case of nonlinear regression models under the slash skew-t distribution, we present some distributional properties and use an EM-type algorithm to estimate the parameters. Also, to obtain the estimation errors, we derive the observed information matrix analytically. To describe the influence of the observations on the ML estimates, we perform a sensitivity analysis. Finally, we conduct simulation studies and a real data analysis to show the performance of the proposed model.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44331177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stochastic Models for Pricing Weather Derivatives using Constant Risk Premium","authors":"Jeffrey Pai, N. Ravishanker","doi":"10.29252/JIRSS.17.2.4","DOIUrl":"https://doi.org/10.29252/JIRSS.17.2.4","url":null,"abstract":"Pricing weather derivatives is becoming increasingly useful, especially in developing economies. We describe a statistical model-based approach for pricing weather derivatives by modeling and forecasting daily average temperature data that exhibit long-range dependence. We pre-process the temperature data by filtering for seasonality and volatility and fit autoregressive fractionally integrated moving average (ARFIMA) models, employing the preconditioned conjugate gradient (PCG) algorithm for fast computation of the likelihood function. We illustrate our approach using daily temperature data from 1970 to 2008 for cities traded on the Chicago Mercantile Exchange (CME), which we employ for pricing degree-days futures contracts. We compare the statistical approach with traditional burn analysis using a simple additive risk loading principle for pricing, where the risk premium is estimated by the method of least squares using data on observed prices and the corresponding estimates of prices from the best model fit to the temperature data.","PeriodicalId":42965,"journal":{"name":"JIRSS-Journal of the Iranian Statistical Society","volume":" ","pages":""},"PeriodicalIF":0.4,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44963865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}