A two-parameter general inflated Poisson distribution: Properties and applications
Statistical Methodology, Volume 29, Pages 32-50. Pub Date: 2016-03-01. DOI: 10.1016/j.stamet.2015.10.002
Athanasios C. Rakitzis , Philippe Castagliola , Petros E. Maravelakis
In this work, we propose and study a two-parameter modification of the ordinary Poisson distribution that is suitable for modeling non-typical count data. This model can be viewed as an extension of the zero-inflated Poisson distribution. We derive the proposed model as a special case of a general one and focus our study on it. The theoretical properties of each model are given, and estimation methods for the two-parameter model are discussed. Three practical examples illustrate its usefulness. The results show that the proposed model is very flexible in the modeling of various types of count data.
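As a point of reference for the model being extended, the sketch below (Python, with illustrative parameter names `pi` and `lam`) evaluates the ordinary zero-inflated Poisson pmf; the paper's two-parameter generalization is not reproduced here.

```python
# A minimal sketch of the ordinary zero-inflated Poisson (ZIP) pmf that the
# proposed distribution extends; `pi` (zero-inflation weight) and `lam`
# (Poisson rate) are illustrative names, not the paper's notation.
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, pi, lam):
    """P(X = k) for a ZIP model: a point mass at 0 mixed with Poisson(lam)."""
    k = np.asarray(k)
    pmf = (1 - pi) * poisson.pmf(k, lam)
    return np.where(k == 0, pi + pmf, pmf)

# Example: probabilities of counts 0, 1, 2 with 20% extra zeros and rate 3.
print(zip_pmf([0, 1, 2], pi=0.2, lam=3.0))
```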
{"title":"A two-parameter general inflated Poisson distribution: Properties and applications","authors":"Athanasios C. Rakitzis , Philippe Castagliola , Petros E. Maravelakis","doi":"10.1016/j.stamet.2015.10.002","DOIUrl":"10.1016/j.stamet.2015.10.002","url":null,"abstract":"<div><p>In this work, we propose and study a two-parameter modification of the ordinary Poisson distribution that is suitable for the modeling of non-typical count data. This model can be viewed as an extension of the zero-inflated Poisson distribution. We derive the proposed model as a special case of a general one and focus our study on it. The theoretical properties for each model are given, while estimation methods for the two-parameter model are discussed as well. Three practical examples illustrate its usefulness. The results show that the proposed model is very flexible in the modeling of various types of count data.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"29 ","pages":"Pages 32-50"},"PeriodicalIF":0.0,"publicationDate":"2016-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.10.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consistency of M-estimators of nonlinear signal processing models
Statistical Methodology, Volume 28, Pages 18-36. Pub Date: 2016-01-01. DOI: 10.1016/j.stamet.2015.07.004
Kaushik Mahata , Amit Mitra , Sharmishtha Mitra
In this paper, we consider the problem of robust M-estimation of the parameters of nonlinear signal processing models. We investigate the conditions under which the estimators are strongly consistent for convex and non-convex penalty functions and for a wide class of noise scenarios contaminating the actual transmitted signal. It is shown that the M-estimators of a general nonlinear signal model are asymptotically consistent with probability one under different sets of sufficient conditions on the loss function and the noise distribution. Simulations are performed for a nonlinear superimposed sinusoidal model to observe the small-sample performance of the M-estimators for various heavy-tailed error distributions, outlier contamination levels and sample sizes.
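The following sketch illustrates the kind of robust fit the simulations describe: a single superimposed sinusoid estimated with a Huber loss via `scipy.optimize.least_squares` under heavy-tailed noise. The model form, starting values and noise choice are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of robust M-estimation for y_t = A cos(w t) + B sin(w t) + noise.
# Heavy-tailed (Student-t) noise and the starting values are illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.arange(200)
y = 2.0 * np.cos(0.5 * t) + 1.0 * np.sin(0.5 * t) + rng.standard_t(df=2, size=t.size)

def residuals(theta):
    A, B, w = theta
    return y - (A * np.cos(w * t) + B * np.sin(w * t))

# Huber loss makes the fit an M-estimator rather than plain least squares.
fit = least_squares(residuals, x0=[1.0, 1.0, 0.45], loss="huber", f_scale=1.0)
print(fit.x)  # robust estimates of (A, B, w)
```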
{"title":"Consistency of M-estimators of nonlinear signal processing models","authors":"Kaushik Mahata , Amit Mitra , Sharmishtha Mitra","doi":"10.1016/j.stamet.2015.07.004","DOIUrl":"10.1016/j.stamet.2015.07.004","url":null,"abstract":"<div><p>In this paper, we consider the problem of robust M-estimation of parameters of nonlinear signal processing models. We investigate the conditions under which estimators are strongly consistent for convex and non-convex penalty functions and a wide class of noise scenarios, contaminating the actual transmitted signal. It is shown that the M-estimators of a general nonlinear signal model are asymptotically consistent with probability one under different sets of sufficient conditions on loss function and noise distribution. Simulations are performed for nonlinear superimposed sinusoidal model to observe the small sample performance of the M-estimators for various heavy tailed error distributions, outlier contamination levels and sample sizes.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"28 ","pages":"Pages 18-36"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.07.004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian predictive inference under a Dirichlet process with sensitivity to the normal baseline
Statistical Methodology, Volume 28, Pages 1-17. Pub Date: 2016-01-01. DOI: 10.1016/j.stamet.2015.07.003
Balgobin Nandram, Jiani Yin
It is well known that the Dirichlet process (DP) model and Dirichlet process mixture (DPM) model are sensitive to the specifications of the baseline distribution. Given a sample from a finite population, we perform Bayesian predictive inference about a finite population quantity (e.g., mean) using a DP model. Generally, in many applications a normal distribution is used for the baseline distribution. Therefore, our main objective is empirical and we show the extent of the sensitivity of inference about the finite population mean with respect to six distributions (normal, lognormal, gamma, inverse Gaussian, a two-component normal mixture and a skewed normal). We have compared the DP model using these baselines with the Polya posterior (fully nonparametric) and the Bayesian bootstrap (sampling with a Haldane prior). We used two examples, one on income data and the other on body mass index data, to compare the performance of these three procedures. These examples show some differences among the six baseline distributions, the Polya posterior and the Bayesian bootstrap, indicating that the normal baseline model cannot be used automatically. Therefore, we consider a simulation study to assess this issue further, and we show how to solve this problem using a leave-one-out kernel baseline. Because the leave-one-out kernel baseline cannot be easily applied to the DPM, we show theoretically how one can solve the sensitivity problem for the DPM as well.
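As an illustration of one of the compared procedures, the sketch below implements the Bayesian bootstrap (Dirichlet re-weighting of the observed sample) for a finite-population mean; the DP and Polya-posterior computations are not reproduced, and the income-like data are simulated for illustration.

```python
# A minimal sketch of the Bayesian bootstrap for the mean of a sampled variable:
# each posterior draw re-weights the observed values with Dirichlet(1, ..., 1) weights.
import numpy as np

rng = np.random.default_rng(1)
y = rng.lognormal(mean=3.0, sigma=1.0, size=100)   # skewed, income-like sample (simulated)

def bayesian_bootstrap_mean(y, draws=5000, rng=rng):
    w = rng.dirichlet(np.ones(len(y)), size=draws)  # one weight vector per posterior draw
    return w @ y

post = bayesian_bootstrap_mean(y)
print(np.percentile(post, [2.5, 50, 97.5]))  # posterior summary for the finite-population mean
```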
{"title":"Bayesian predictive inference under a Dirichlet process with sensitivity to the normal baseline","authors":"Balgobin Nandram, Jiani Yin","doi":"10.1016/j.stamet.2015.07.003","DOIUrl":"10.1016/j.stamet.2015.07.003","url":null,"abstract":"<div><p>It is well known that the Dirichlet process (DP) model and Dirichlet process mixture (DPM) model are sensitive to the specifications of the baseline distribution. Given a sample from a finite population, we perform Bayesian predictive inference about a finite population quantity (e.g., mean) using a DP model. Generally, in many applications a normal distribution is used for the baseline distribution. Therefore, our main objective is empirical and we show the extent of the sensitivity of inference about the finite population mean with respect to six distributions (normal, lognormal, gamma, inverse Gaussian, a two-component normal mixture and a skewed normal). We have compared the DP model using these baselines with the Polya posterior (fully nonparametric) and the Bayesian bootstrap (sampling with a Haldane prior). We used two examples, one on income data and the other on body mass index data, to compare the performance of these three procedures. These examples show some differences among the six baseline distributions, the Polya posterior and the Bayesian bootstrap, indicating that the normal baseline model cannot be used automatically. Therefore, we consider a simulation study to assess this issue further, and we show how to solve this problem using a leave-one-out kernel baseline. Because the leave-one-out kernel baseline cannot be easily applied to the DPM, we show theoretically how one can solve the sensitivity problem for the DPM as well.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"28 ","pages":"Pages 1-17"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.07.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The step-stress tampered failure rate model under interval monitoring
Statistical Methodology, Volume 27, Pages 100-122. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.06.002
Panayiotis Bobotas, Maria Kateri
A step-stress accelerated life testing model is constructed for type-I censored experiments in which continuous monitoring of the tested items is infeasible and only their inspection at particular time points is possible, thus producing grouped data. A general scale family of distributions is considered for the underlying lifetimes, which allows for flexible modeling by permitting different lifetime distributions at different stress levels. The maximum likelihood estimators of its parameters and their density functions are derived explicitly only when the inspection points coincide with the points of stress-level change. In the case of additional inspection points, the estimates are obtained numerically. Asymptotic, exact (whenever possible) and bootstrap confidence intervals (CIs) are considered. For the bootstrap CIs, a smoothing modification is introduced that accounts for the categorical nature of the data.
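A minimal sketch of the grouped-data ingredients is given below for an exponential baseline with a tampered failure rate at a single stress-change point; the symbols (`lam1`, `alpha`, `tau`) and the exponential choice are illustrative assumptions, not the paper's general scale family.

```python
# Survival function under a single stress change at tau: hazard lam1 before tau,
# alpha * lam1 after it (tampered failure rate, exponential baseline). Illustrative only.
import numpy as np

def step_stress_survival(t, lam1, alpha, tau):
    t = np.asarray(t, dtype=float)
    s_before = np.exp(-lam1 * t)
    s_after = np.exp(-lam1 * tau - alpha * lam1 * (t - tau))
    return np.where(t <= tau, s_before, s_after)

# Expected cell probabilities for grouped (interval-inspected) data:
inspections = np.array([0.0, 1.0, 2.0, 3.0])          # tau = 1.0 coincides with an inspection point
S = step_stress_survival(inspections, lam1=0.3, alpha=2.0, tau=1.0)
print(-np.diff(S))                                     # probability of failing in each interval
```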
{"title":"The step-stress tampered failure rate model under interval monitoring","authors":"Panayiotis Bobotas, Maria Kateri","doi":"10.1016/j.stamet.2015.06.002","DOIUrl":"10.1016/j.stamet.2015.06.002","url":null,"abstract":"<div><p>A step-stress accelerated life testing model is constructed that deals with type-I censored experiments for which a continuous monitoring of the tested items is infeasible and only their inspection at particular time points<span><span> is possible, producing thus grouped data. A general scale family of distributions is considered for the underlying lifetimes, which allows for flexible modeling by permitting different lifetime distributions for different stress levels. The maximum likelihood estimators of its parameters and their density functions are derived explicitly only when the inspection points coincide with the points of stress-level change. In case of additional inspection points, the estimates are obtained numerically. Asymptotic, exact (whenever possible) and </span>bootstrap confidence intervals (CIs) are considered. For the bootstrap CIs a smoothing-modification is introduced, accounting for the categorical nature of the data.</span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"27 ","pages":"Pages 100-122"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.06.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster-based L2 re-weighted regression
Statistical Methodology, Volume 27, Pages 51-81. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.05.005
Ekele Alih, Hong Choon Ong
A simple robust L2-regression estimator is presented. The proposed method blends a minimum covariance determinant (MCD) concentration algorithm with a controlled ordinary least squares regression phase. A hierarchical cluster analysis then partitions the data into a main cluster (the "half set") and a minor cluster of one or more groups. An initial least squares regression estimate arises from the main cluster. Thereafter, a group-additive difference-in-fit statistic is used to activate the minor cluster, and a controlled re-weighted least squares regression yields a robust, efficient estimator with a high breakdown value. Simulation experiments show the advantage of the proposed method over popular robust regression techniques in terms of the robustness of the coefficients and of blending an outlier diagnostic procedure with parameter estimation.
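The sketch below illustrates only the first stage of this idea under stated assumptions: an MCD fit on the joint (X, y) data identifies a clean "half set", and ordinary least squares is run on it. It uses scikit-learn's `MinCovDet` and simulated data, and it omits the paper's clustering and re-weighting stages.

```python
# A minimal sketch: use MCD on (X, y) jointly to find a clean subset, then OLS on it.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.5, size=100)
y[:10] += 15.0                                    # gross outliers in the response

Z = np.column_stack([X, y])
mcd = MinCovDet(random_state=0).fit(Z)
clean = mcd.support_                              # boolean mask of the retained "half set"

beta, *_ = np.linalg.lstsq(X[clean], y[clean], rcond=None)
print(beta)                                       # initial robust regression estimate
```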
{"title":"Cluster-based L2 re-weighted regression","authors":"Ekele Alih, Hong Choon Ong","doi":"10.1016/j.stamet.2015.05.005","DOIUrl":"10.1016/j.stamet.2015.05.005","url":null,"abstract":"<div><p>A simple robust <span><math><mi>L</mi><mn>2</mn></math></span>-regression estimator is presented.<!--> <!-->The proposed method blends a minimum covariance determinant <span><math><mrow><mo>(</mo><mi>M</mi><mi>C</mi><mi>D</mi><mo>)</mo></mrow></math></span> concentration algorithm with a controlled ordinary least squares regression phase.<!--> <!-->A hierarchical cluster analysis then partitions the data into main cluster of “half set”<!--> <!-->and a minor cluster of one or more groups.<!--> <!-->An initial least squares regression estimate arises from the main cluster of “half set”.<!--> <!-->Thereafter, a group-additive difference in fit statistic is used to activate the minor cluster and a controlled re-weighted least squares regression yields a robust efficient estimator with high breakdown value.<!--> <!-->Simulation experiment shows the advantage of the proposed method over the popular robust regression techniques in terms of robustness of coefficients, and blending outlier diagnostic procedure with parameter estimation.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"27 ","pages":"Pages 51-81"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.05.005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double acceptance sampling plan based on truncated life tests for half exponential power distribution
Statistical Methodology, Volume 27, Pages 123-131. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.07.002
Wenhao Gui , Meiping Xu
In this paper, we develop a double acceptance sampling plan for the half exponential power distribution when the lifetime experiment is truncated at a prefixed time. The zero- and one-failure schemes are considered. We obtain the minimum sample sizes of the first and second samples necessary to ensure the specified mean life at the given consumer's confidence level. The operating characteristic values and the minimum ratios of the mean life to the specified life are also analyzed. A numerical example is provided to illustrate the double acceptance sampling plan.
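For illustration, the sketch below computes the acceptance probability of a common "zero-one" double sampling rule: accept on zero failures in the first sample and, on exactly one failure, accept only if the second sample has none. Whether this matches the paper's exact scheme, and the numbers used, are assumptions.

```python
# Operating characteristic of a generic zero-one double sampling plan;
# p is the probability that an item fails the truncated life test.
from scipy.stats import binom

def oc_double_plan(p, n1, n2):
    p_accept_first = binom.pmf(0, n1, p)                      # 0 failures in first sample
    p_accept_second = binom.pmf(1, n1, p) * binom.pmf(0, n2, p)  # 1 failure, then a clean second sample
    return p_accept_first + p_accept_second

print(oc_double_plan(p=0.05, n1=10, n2=15))   # acceptance probability at p = 0.05
```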
{"title":"Double acceptance sampling plan based on truncated life tests for half exponential power distribution","authors":"Wenhao Gui , Meiping Xu","doi":"10.1016/j.stamet.2015.07.002","DOIUrl":"10.1016/j.stamet.2015.07.002","url":null,"abstract":"<div><p>In this paper, we develop a double acceptance sampling plan for half exponential power distribution when the lifetime experiment is truncated at a prefixed time. The zero and one failure schemes are considered. We obtain the minimum sample sizes of the first and second samples necessary to ensure the specified mean life at the given consumer’s confidence level. The operating characteristic values and the minimum ratios of the mean life to the specified life are also analyzed. Numerical example is provided to illustrate the double acceptance sampling plan.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"27 ","pages":"Pages 123-131"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.07.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multivariate wavelet-based density estimation with size-biased data
Statistical Methodology, Volume 27, Pages 12-19. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.05.002
Esmaeil Shirazi , Hassan Doosti
In this paper, we employ a wavelet method to propose a multivariate density estimator based on a biased sample. We investigate the asymptotic rate of convergence of the proposed estimator over a large class of densities in the Besov space $B^{s}_{pq}$. Moreover, we prove the consistency of our estimator when the expectation of the weight function is unknown. This paper extends the results of Ramirez and Vidakovic (2010) and Chesneau et al. (2012) to the multivariate case.
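The size-bias correction idea can be sketched with a weighted kernel estimator standing in for the wavelet estimator: under length-biased sampling with weight function w(x) = x, each observation is re-weighted by 1/w(x). The kernel substitute, the gamma data and the choice w(x) = x are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch of correcting for size-biased sampling by inverse weighting,
# using a weighted kernel density estimate in place of a wavelet estimator.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x = rng.gamma(shape=3.0, scale=1.0, size=500)       # pretend these are length-biased draws

weights = 1.0 / x                                   # 1 / w(x) with w(x) = x
weights /= weights.sum()                            # normalization plays the role of 1 / E[w(X)]
kde = gaussian_kde(x, weights=weights)              # estimate of the unbiased density
print(kde(np.array([1.0, 2.0, 3.0])))
```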
Multivariate discrete scalar hazard rate
Statistical Methodology, Volume 27, Pages 39-50. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.05.003
N. Unnikrishnan Nair, P.G. Sankaran
In the present paper, we study the properties of the multivariate discrete scalar hazard rate. Its continuous analogue, introduced in the early seventies, did not attract much attention because it could not be used to identify the corresponding life distribution. We find the conditions under which an n-variate discrete scalar hazard rate determines the distribution uniquely. Several other properties of this hazard rate that can be employed in modelling lifetime data are discussed. Some ageing classes based on the scalar hazard function are suggested.
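A minimal numeric sketch of a bivariate discrete scalar hazard rate is given below, taken here as h(x1, x2) = P(X1 = x1, X2 = x2) / P(X1 >= x1, X2 >= x2); the joint pmf (independent geometrics on a truncated grid) is purely illustrative and not from the paper.

```python
# Scalar hazard h(x1, x2) = joint pmf / joint survival for an illustrative bivariate pmf.
import numpy as np
from scipy.stats import geom

n = 20
p1 = geom.pmf(np.arange(1, n + 1), 0.3)
p2 = geom.pmf(np.arange(1, n + 1), 0.5)
joint = np.outer(p1, p2)                                 # joint pmf on the grid {1..n} x {1..n}

# Survival P(X1 >= x1, X2 >= x2): reverse cumulative sum along each axis.
surv = np.flip(np.flip(joint, 0).cumsum(0), 0)
surv = np.flip(np.flip(surv, 1).cumsum(1), 1)

hazard = joint / surv
print(hazard[0, 0], hazard[4, 4])                        # scalar hazard at (1, 1) and (5, 5)
```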
{"title":"Multivariate discrete scalar hazard rate","authors":"N. Unnikrishnan Nair, P.G. Sankaran","doi":"10.1016/j.stamet.2015.05.003","DOIUrl":"10.1016/j.stamet.2015.05.003","url":null,"abstract":"<div><p>In the present paper, we study the properties of the multivariate discrete scalar hazard rate. Its continuous analogue introduced in the early seventies did not attract much attention because it could not be used to identify the corresponding life distribution. We find the conditions under which an <span><math><mi>n</mi></math></span>-variate discrete scalar hazard rate can determine the distribution uniquely. Several other properties of this hazard rate which can be employed in modelling lifetime data are discussed. Some ageing classes based on the scalar hazard function are suggested.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"27 ","pages":"Pages 39-50"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.05.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical inference on partial linear additive models with distortion measurement errors
Statistical Methodology, Volume 27, Pages 20-38. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.05.004
Yujie Gai , Jun Zhang , Gaorong Li , Xinchao Luo
We consider statistical inference for partial linear additive models (PLAMs) when the linear covariates are measured with errors and distorted by unknown functions of commonly observable confounding variables. A semiparametric profile least squares estimation procedure is proposed to estimate the unknown parameters under unrestricted and restricted conditions. Asymptotic properties of the estimators are established. To test a hypothesis on the parametric components, a test statistic based on the difference between the residual sums of squares under the null and alternative hypotheses is proposed, and we show that its limiting distribution is a weighted sum of independent standard chi-squared distributions. A bootstrap procedure is further proposed to calculate critical values. Simulation studies demonstrate the performance of the proposed procedure, and a real example is analyzed as an illustration.
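One way to picture the distortion setting is the generic covariate-calibration step sketched below: the observed covariate is X_obs = psi(U) * X with E[psi(U)] = 1, psi is estimated by smoothing X_obs on the confounder U and then divided out before fitting. This is a calibration sketch under stated assumptions, not the authors' profile least squares procedure, and all names are illustrative.

```python
# A minimal sketch of calibrating a multiplicatively distorted covariate.
import numpy as np

rng = np.random.default_rng(4)
n = 300
U = rng.uniform(0, 1, n)                       # observable confounding variable
psi = 1.0 + 0.4 * (U - 0.5)                    # unknown distortion function, mean approximately 1
X_true = rng.normal(2.0, 1.0, n)
X_obs = psi * X_true                           # only the distorted covariate is observed

def nw_smooth(u0, U, Y, h=0.1):
    """Simple Nadaraya-Watson smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((u0[:, None] - U[None, :]) / h) ** 2)
    return (w * Y).sum(1) / w.sum(1)

psi_hat = nw_smooth(U, U, X_obs) / X_obs.mean()   # E[X_obs | U] / E[X_obs] estimates psi(U)
X_cal = X_obs / psi_hat                           # calibrated covariate fed into the PLAM fit
print(np.corrcoef(X_cal, X_true)[0, 1])
```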
{"title":"Statistical inference on partial linear additive models with distortion measurement errors","authors":"Yujie Gai , Jun Zhang , Gaorong Li , Xinchao Luo","doi":"10.1016/j.stamet.2015.05.004","DOIUrl":"10.1016/j.stamet.2015.05.004","url":null,"abstract":"<div><p>We consider statistical inference for partial linear additive models (PLAMs) when the linear covariates are measured with errors and distorted by unknown functions of commonly observable confounding variables. A semiparametric profile least squares estimation procedure is proposed to estimate unknown parameter under unrestricted and restricted conditions. Asymptotic properties for the estimators are established. To test a hypothesis on the parametric components, a test statistic based on the difference between the residual sums of squares under the null and alternative hypotheses is proposed, and we further show that its limiting distribution is a weighted sum of independent standard chi-squared distributions. A bootstrap procedure is further proposed to calculate critical values. Simulation studies are conducted to demonstrate the performance of the proposed procedure and a real example is analyzed for an illustration.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"27 ","pages":"Pages 20-38"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.05.004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55093019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Bayesian analysis of generalized odds ratios assuming multivariate skew-normal distribution for the error terms in the system equation
Statistical Methodology, Volume 27, Pages 1-11. Pub Date: 2015-11-01. DOI: 10.1016/j.stamet.2015.05.001
S.K. Ghoreishi , M.R. Meshkani
In this paper, we develop a methodology for the dynamic Bayesian analysis of generalized odds ratios in contingency tables. It is standard practice to assume a normal distribution for the random effects in the dynamic system equations. Nevertheless, the normality assumption may be unrealistic in some applications, and hence the validity of inferences can be dubious. Therefore, we assume a multivariate skew-normal distribution for the error terms in the system equation at each step. Moreover, we introduce a moving average approach to elicit the hyperparameters. Both simulated and real data are analyzed to illustrate the application of this methodology.
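For concreteness, the sketch below draws univariate skew-normal error terms via the standard stochastic representation Z = delta |W0| + sqrt(1 - delta^2) W1 with delta = alpha / sqrt(1 + alpha^2); the multivariate version and the embedding in the dynamic model are not reproduced, and alpha = 4 is an arbitrary illustration.

```python
# A minimal sketch of generating skew-normal SN(0, 1, alpha) error terms.
import numpy as np

def skew_normal_draws(alpha, size, rng=np.random.default_rng(5)):
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    w0 = np.abs(rng.standard_normal(size))       # half-normal component controls the skew
    w1 = rng.standard_normal(size)
    return delta * w0 + np.sqrt(1.0 - delta ** 2) * w1

errors = skew_normal_draws(alpha=4.0, size=1000)  # right-skewed system-equation errors
print(errors.mean(), errors.std())
```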
{"title":"Dynamic Bayesian analysis of generalized odds ratios assuming multivariate skew-normal distribution for the error terms in the system equation","authors":"S.K. Ghoreishi , M.R. Meshkani","doi":"10.1016/j.stamet.2015.05.001","DOIUrl":"10.1016/j.stamet.2015.05.001","url":null,"abstract":"<div><p>In this paper, we develop a methodology for the dynamic Bayesian analysis<span> of generalized odds ratios in contingency tables. It is a standard practice to assume a normal distribution for the random effects in the dynamic system equations. Nevertheless, the normality assumption may be unrealistic in some applications and hence the validity of inferences can be dubious. Therefore, we assume a multivariate skew-normal distribution for the error terms in the system equation at each step. Moreover, we introduce a moving average approach to elicit the hyperparameters. Both simulated data and real data are analyzed to illustrate the application of this methodology.</span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"27 ","pages":"Pages 1-11"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2015.05.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}