Pub Date : 2015-05-01DOI: 10.1016/j.stamet.2014.11.004
A.I. Khuri , S. Mukhopadhyay , M.A. Khuri
Bernstein polynomials have many interesting properties. In statistics, they were mainly used to estimate density functions and regression relationships. The main objective of this paper is to promote further use of Bernstein polynomials in statistics. This includes (1) providing a high-level approximation of the moments of a continuous function g(X) of a random variable X, and (2) proving Jensen’s inequality concerning a convex function without requiring second differentiability of the function. The approximation in (1) is demonstrated to be quite superior to the delta method, which is used to approximate the variance of g(X) with the added assumption of differentiability of the function. Two numerical examples are given to illustrate the application of the proposed methodology in (1).
{"title":"Approximating moments of continuous functions of random variables using Bernstein polynomials","authors":"A.I. Khuri , S. Mukhopadhyay , M.A. Khuri","doi":"10.1016/j.stamet.2014.11.004","DOIUrl":"10.1016/j.stamet.2014.11.004","url":null,"abstract":"<div><p><span>Bernstein polynomials<span> have many interesting properties. In statistics, they were mainly used to estimate density functions and regression relationships. The main objective of this paper is to promote further use of Bernstein polynomials in statistics. This includes (1) providing a high-level approximation of the moments of a continuous function </span></span><span><math><mi>g</mi><mrow><mo>(</mo><mi>X</mi><mo>)</mo></mrow></math></span> of a random variable <span><math><mi>X</mi></math></span>, and (2) proving <em>Jensen’s inequality</em><span> concerning a convex function<span> without requiring second differentiability of the function. The approximation in (1) is demonstrated to be quite superior to the </span></span><span><em>delta method</em></span>, which is used to approximate the variance of <span><math><mi>g</mi><mrow><mo>(</mo><mi>X</mi><mo>)</mo></mrow></math></span> with the added assumption of differentiability of the function. Two numerical examples are given to illustrate the application of the proposed methodology in (1).</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"24 ","pages":"Pages 37-51"},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.11.004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
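The Bernstein construction behind (1) can be sketched in a few lines. This is an illustrative approximation of g on [0, 1] (and hence of moments of g(X) for X supported there), not the authors' exact estimator:

```python
from math import comb

def bernstein(g, n, x):
    """Degree-n Bernstein polynomial of g on [0, 1]:
    B_n(g; x) = sum_k g(k/n) * C(n, k) * x**k * (1 - x)**(n - k)."""
    return sum(g(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Moments follow by averaging B_n(g; x) over the distribution of X in
# place of g(x). For g(t) = t**2 the identity B_n(t^2; x) = x^2 + x(1-x)/n
# holds exactly, so the pointwise error is x(1-x)/n.
g = lambda t: t**2
approx = bernstein(g, 200, 0.3)
```

Since B_n(g; ·) converges uniformly to g for continuous g, the induced moment approximation improves with n without requiring any differentiability of g, which is the contrast drawn with the delta method.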
Pub Date : 2015-05-01DOI: 10.1016/j.stamet.2014.08.002
J.R.M. Hosking , N. Balakrishnan
We show that if a linear combination of expectations of order statistics has mean zero across all random variables that have finite mean, then the linear combination is identically zero. A consequence of this result is that any functional of a probability distribution can have essentially only one unbiased L-estimator (i.e., an estimator that has the form of a linear combination of order statistics): if two such linear combinations have the same expectation then they must be algebraically identical. We use this result to prove the equivalence of two statistics that have been proposed as estimators of the L-moments introduced by Hosking (1990), and to provide alternative means of computing estimators of the trimmed L-moments introduced by Elamir and Seheult (2003). We also make comparisons of the speed of various methods for computing estimators of L-moments and trimmed L-moments.
{"title":"A uniqueness result for L-estimators, with applications to L-moments","authors":"J.R.M. Hosking , N. Balakrishnan","doi":"10.1016/j.stamet.2014.08.002","DOIUrl":"10.1016/j.stamet.2014.08.002","url":null,"abstract":"<div><p><span>We show that if a linear combination<span> of expectations of order statistics has mean zero across all random variables that have finite mean, then the linear combination is identically zero. A consequence of this result is that any functional of a probability distribution can have essentially only one unbiased </span></span><span><math><mi>L</mi></math></span>-estimator (i.e., an estimator that has the form of a linear combination of order statistics): if two such linear combinations have the same expectation then they must be algebraically identical. We use this result to prove the equivalence of two statistics that have been proposed as estimators of the <em>L</em>-moments introduced by Hosking (1990), and to provide alternative means of computing estimators of the trimmed <em>L</em>-moments introduced by Elamir and Seheult (2003). We also make comparisons of the speed of various methods for computing estimators of <em>L</em>-moments and trimmed <em>L</em>-moments.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"24 ","pages":"Pages 69-80"},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.08.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
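Sample L-moments are themselves L-estimators, i.e. linear combinations of order statistics. A minimal sketch of the first two unbiased estimators via probability-weighted moments, using the standard textbook formulas rather than anything specific to this paper:

```python
def sample_l_moments(data):
    """Unbiased estimators of the first two L-moments via the
    probability-weighted moments b0, b1: l1 = b0, l2 = 2*b1 - b0."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # b1 = (1/n) * sum over ranks i of ((i-1)/(n-1)) * x_(i)
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return b0, 2 * b1 - b0

l1, l2 = sample_l_moments([1.0, 2.0, 3.0])
```

The uniqueness result above is what guarantees that any other unbiased linear combination of order statistics estimating the same L-moment must reduce algebraically to these weights.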
Pub Date : 2015-05-01DOI: 10.1016/j.stamet.2014.11.003
Shakhawat Hossain , S. Ejaz Ahmed , Kjell A. Doksum
We consider estimation in generalized linear models when there are many potential predictors and some of them may not have influence on the response of interest. In the context of two competing models where one model includes all predictors and the other restricts variable coefficients to a candidate linear subspace based on subject matter or prior knowledge, we investigate the relative performances of Stein type shrinkage, pretest, and penalty estimators (L1-GLM, adaptive L1-GLM, and SCAD) with respect to the unrestricted maximum likelihood estimator (MLE). The asymptotic properties of the pretest and shrinkage estimators including the derivation of asymptotic distributional biases and risks are established. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the unrestricted MLE. A Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to the MSE of the penalty estimators in many situations and in particular performs better than the penalty estimators when the dimension of the restricted parameter space is large. The Steinian shrinkage and penalty estimators all improve substantially on the unrestricted MLE. A real data set analysis is also presented to compare the suggested methods.
{"title":"Shrinkage, pretest, and penalty estimators in generalized linear models","authors":"Shakhawat Hossain , S. Ejaz Ahmed , Kjell A. Doksum","doi":"10.1016/j.stamet.2014.11.003","DOIUrl":"10.1016/j.stamet.2014.11.003","url":null,"abstract":"<div><p><span>We consider estimation in generalized linear models<span> when there are many potential predictors and some of them may not have influence on the response of interest. In the context of two competing models where one model includes all predictors and the other restricts variable coefficients<span> to a candidate linear subspace based on subject matter or prior knowledge, we investigate the relative performances of Stein type shrinkage, pretest, and penalty estimators (</span></span></span><span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>GLM, adaptive <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span><span><span>GLM, and SCAD) with respect to the unrestricted maximum likelihood estimator (MLE). The </span>asymptotic properties<span><span> of the pretest and shrinkage estimators including the derivation of asymptotic distributional biases and risks are established. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the unrestricted MLE. A </span>Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to the MSE of the penalty estimators in many situations and in particular performs better than the penalty estimators when the dimension of the restricted parameter space is large. The Steinian shrinkage and penalty estimators all improve substantially on the unrestricted MLE. 
A real data set analysis is also presented to compare the suggested methods.</span></span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"24 ","pages":"Pages 52-68"},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.11.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
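Stein-type shrinkage in its simplest form can be illustrated with the positive-part James-Stein estimator, which shrinks a vector of unrestricted estimates toward a target by a data-driven factor. This is a toy normal-means sketch, not the paper's GLM shrinkage estimators (where the target is a restricted subspace rather than the origin):

```python
def james_stein(z, sigma2=1.0):
    """Positive-part James-Stein estimator: shrink the raw estimate
    vector z toward 0 by factor max(0, 1 - (p-2)*sigma2/||z||^2).
    Requires len(z) > 2 for the risk improvement to hold."""
    p = len(z)
    s = sum(v * v for v in z)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / s)
    return [factor * v for v in z]

shrunk = james_stein([3.0, 4.0, 0.0, 0.0])  # factor = 1 - 2/25 = 0.92
```

The pretest estimator replaces the smooth factor with a hard 0/1 choice based on a preliminary test, which is why its risk behaves less favorably near the boundary.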
Pub Date : 2015-05-01DOI: 10.1016/j.stamet.2014.11.002
M. Rezapour , N. Balakrishnan
In this paper, we consider a heavy-tailed stochastic volatility model X_t = σ_t Z_t, t ∈ ℤ, where the volatility sequence (σ_t) and the iid noise sequence (Z_t) are assumed to be independent, (σ_t) is regularly varying with index α > 0, and the Z_t’s are assumed to have moments of order less than α/2. Here, we prove that, under certain conditions, the stochastic volatility model inherits the anti-clustering condition of (X_t) from the volatility sequence (σ_t). Next, we consider a stochastic volatility model in which (σ_t) is an exponential AR(2) process with regularly varying marginals and show that this model satisfies the regular variation, mixing and anti-clustering conditions in Davis and Hsing (1995).
{"title":"Some properties of stochastic volatility model that are induced by its volatility sequence","authors":"M. Rezapour , N. Balakrishnan","doi":"10.1016/j.stamet.2014.11.002","DOIUrl":"10.1016/j.stamet.2014.11.002","url":null,"abstract":"<div><p>In this paper, we consider a heavy-tailed stochastic volatility model <span><math><msub><mrow><mi>X</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>=</mo><msub><mrow><mi>σ</mi></mrow><mrow><mi>t</mi></mrow></msub><msub><mrow><mi>Z</mi></mrow><mrow><mi>t</mi></mrow></msub></math></span>, <span><math><mi>t</mi><mo>∈</mo><mi>Z</mi></math></span>, where the volatility sequence <span><math><mrow><mo>(</mo><msub><mrow><mi>σ</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow></math></span> and the iid noise sequence <span><math><mrow><mo>(</mo><msub><mrow><mi>Z</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow></math></span> are assumed to be independent, <span><math><mrow><mo>(</mo><msub><mrow><mi>σ</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow></math></span> is regularly varying with index <span><math><mi>α</mi><mo>></mo><mn>0</mn><mspace></mspace></math></span>, and the <span><math><msub><mrow><mi>Z</mi></mrow><mrow><mi>t</mi></mrow></msub></math></span>’s to have moments of order less than <span><math><mi>α</mi><mo>/</mo><mn>2</mn></math></span>. Here, we prove that, under certain conditions, the stochastic volatility model inherits the anti-clustering condition of <span><math><mrow><mo>(</mo><msub><mrow><mi>X</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow></math></span> from the volatility sequence <span><math><mrow><mo>(</mo><msub><mrow><mi>σ</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow></math></span>. 
Next, we consider a stochastic volatility model in which <span><math><mrow><mo>(</mo><msub><mrow><mi>σ</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow></math></span><span> is an exponential AR(2) process with regularly varying marginals and show that this model satisfies the regular variation, mixing and anti-clustering conditions in Davis and Hsing (1995).</span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"24 ","pages":"Pages 28-36"},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.11.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
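A minimal simulation of the model X_t = σ_t Z_t with a regularly varying volatility sequence. The Pareto volatilities and normal noise are placeholder choices for illustration (the paper's second model uses an exponential AR(2) volatility process instead, and its conditions on the noise moments are more delicate):

```python
import random

def simulate_sv(n, alpha=3.0, seed=0):
    """Simulate X_t = sigma_t * Z_t: iid Pareto(alpha) volatilities
    (regularly varying with index alpha) independent of iid
    standard-normal noise."""
    rng = random.Random(seed)
    sigma = [rng.paretovariate(alpha) for _ in range(n)]
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [s * e for s, e in zip(sigma, z)]

x = simulate_sv(500)
```

With independent σ and Z, the tail of X_t is driven by the volatility, which is the mechanism behind the inheritance result described above.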
Pub Date : 2015-05-01DOI: 10.1016/j.stamet.2014.10.002
A. Satty , H. Mwambi , G. Molenberghs
This paper compares the performance of weighted generalized estimating equations (WGEEs), multiple imputation based on generalized estimating equations (MI-GEEs) and generalized linear mixed models (GLMMs) for analyzing incomplete longitudinal binary data when the underlying study is subject to dropout. The paper aims to explore the performance of the above methods in terms of handling dropouts that are missing at random (MAR). The methods are compared on simulated data. The longitudinal binary data are generated from a logistic regression model, under different sample sizes. The incomplete data are created for three different dropout rates. The methods are evaluated in terms of bias, precision and mean square error in the case where data are subject to MAR dropout. In conclusion, across the simulations performed, the MI-GEE method performed better for both small and large sample sizes. Evidently, this should not be seen as formal and definitive proof, but it adds to the body of knowledge about the methods’ relative performance. In addition, the methods are compared using data from a randomized clinical trial.
{"title":"Different methods for handling incomplete longitudinal binary outcome due to missing at random dropout","authors":"A. Satty , H. Mwambi , G. Molenberghs","doi":"10.1016/j.stamet.2014.10.002","DOIUrl":"10.1016/j.stamet.2014.10.002","url":null,"abstract":"<div><p>This paper compares the performance of weighted generalized estimating equations (WGEEs), multiple imputation<span><span><span> based on generalized estimating equations (MI-GEEs) and generalized linear mixed models<span> (GLMMs) for analyzing incomplete longitudinal binary data when the underlying study is subject to dropout. The paper aims to explore the performance of the above methods in terms of handling dropouts that are missing at random (MAR). The methods are compared on simulated data. The longitudinal binary data are generated from a </span></span>logistic regression model, under different sample sizes. The incomplete data are created for three different dropout rates. The methods are evaluated in terms of bias, precision and </span>mean square error in case where data are subject to MAR dropout. In conclusion, across the simulations performed, the MI-GEE method performed better in both small and large sample sizes. Evidently, this should not be seen as formal and definitive proof, but adds to the body of knowledge about the methods’ relative performance. 
In addition, the methods are compared using data from a randomized clinical trial.</span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"24 ","pages":"Pages 12-27"},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.10.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
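The MAR dropout mechanism used in such simulations can be sketched directly: dropout at each occasion depends only on the previously observed response, never on the unobserved future ones. All coefficients below are illustrative and are not the paper's simulation settings:

```python
import random
from math import exp

def simulate_mar_dropout(n_subjects=100, n_times=4, seed=1):
    """Generate longitudinal binary responses from a logistic mean model
    and impose monotone MAR dropout: the dropout probability after time t
    depends only on the observed response y_t (illustrative coefficients)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_subjects):
        row, dropped = [], False
        for t in range(n_times):
            if dropped:
                row.append(None)        # missing after dropout (monotone)
                continue
            p = 1.0 / (1.0 + exp(0.5 - 0.3 * t))   # logistic mean model
            y_t = 1 if rng.random() < p else 0
            row.append(y_t)
            if t < n_times - 1 and rng.random() < 0.1 + 0.2 * y_t:
                dropped = True          # depends on observed y_t only: MAR
        data.append(row)
    return data
```

Because the dropout probability conditions only on observed data, WGEE weighting and MI-GEE imputation are both valid under this mechanism, which is what the comparison above exploits.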
Pub Date : 2015-03-01DOI: 10.1016/j.stamet.2014.09.001
David Han
In accelerated step-stress life tests, the stress levels are allowed to increase at some pre-determined time points such that information on the lifetime parameters can be obtained more quickly than under normal operating conditions. Because there are often multiple causes for the failure of a test unit, such as mechanical or electrical failures, in this article, a step-stress model under time constraint is studied when the lifetimes of different complementary risk factors are independent from exponentiated distributions. Although the baseline distributions can belong to a general class of distributions, including Weibull, Pareto, and Gompertz distributions, particular attention is paid to the case of an exponentiated exponential distribution. Under this setup, the maximum likelihood estimators of the unknown scale and shape parameters of the different causes are derived with the assumption of cumulative damage. Using the asymptotic distributions and the parametric bootstrap method, the confidence intervals for the parameters are then constructed. The precision of the estimates and the performance of the confidence intervals are also assessed through extensive Monte Carlo simulations, and finally, the inference methods discussed here are illustrated with motivating examples.
{"title":"Estimation in step-stress life tests with complementary risks from the exponentiated exponential distribution under time constraint and its applications to UAV data","authors":"David Han","doi":"10.1016/j.stamet.2014.09.001","DOIUrl":"10.1016/j.stamet.2014.09.001","url":null,"abstract":"<div><p><span>In accelerated step-stress life tests, the stress levels are allowed to increase at some pre-determined time points such that information on the lifetime parameters can be obtained more quickly than under normal operating conditions. Because there are often multiple causes for the failure of a test unit, such as mechanical or electrical failures, in this article, a step-stress model under time constraint is studied when the lifetimes of different complementary risk factors are independent from exponentiated distributions. Although the baseline distributions can belong to a general class of distributions, including Weibull, Pareto, and Gompertz distributions, particular attention is paid to the case of an exponentiated </span>exponential distribution<span><span>. Under this setup, the maximum likelihood estimators<span> of the unknown scale and shape parameters of the different causes are derived with the assumption of cumulative damage. Using the asymptotic distributions and the parametric </span></span>bootstrap method<span>, the confidence intervals for the parameters are then constructed. 
The precision of the estimates and the performance of the confidence intervals are also assessed through extensive Monte Carlo simulations, and finally, the inference methods discussed here are illustrated with motivating examples.</span></span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"23 ","pages":"Pages 103-122"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.09.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
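For a single exponential risk under a time constraint (Type-I censoring at τ), the MLE of the mean has a closed form: total time on test divided by the number of observed failures. A one-risk sketch, far simpler than the paper's step-stress competing-risks setup with cumulative damage:

```python
def exp_mle_type1(times, tau):
    """Exponential mean MLE under Type-I censoring at tau: units failing
    after tau are censored and each contributes tau to the total time on
    test; theta_hat = TTT / (number of observed failures)."""
    failures = [t for t in times if t <= tau]
    if not failures:
        raise ValueError("no failures observed before tau")
    ttt = sum(failures) + tau * (len(times) - len(failures))
    return ttt / len(failures)

theta_hat = exp_mle_type1([1.0, 2.0, 10.0], tau=5.0)  # (1 + 2 + 5) / 2
```

In the step-stress competing-risks model the same total-time-on-test logic is applied per stress level and per cause, which is what makes the conditional MLEs tractable.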
Pub Date : 2015-03-01DOI: 10.1016/j.stamet.2014.09.003
Silvia Joekes , Marcelo Smrekar , Emanuel Pimentel Barbosa
When production processes reach high quality standards they are known as high quality processes. In this situation, the conventional p charts (based on 3-sigma limits) used for monitoring non-conforming products have serious drawbacks in detecting changes in p due to excess of false alarm risk. In a previous paper, the authors showed a new p chart that provides a large improvement over the usual p chart in these situations. In this paper, the authors propose a new corrected version of a double sampling (DS) control chart for monitoring the proportion p of non-conforming presented in the literature for large samples, in order to extend its applicability to the case of small samples. This procedure offers better statistical efficiency (in terms of the average run length) than the previous p charts, without increasing the sampling. Tables are provided to aid in the choice of DS parameters. The benefits of the corrected version of a DS chart for monitoring high-quality processes are illustrated with real data.
{"title":"Extending a double sampling control chart for non-conforming proportion in high quality processes to the case of small samples","authors":"Silvia Joekes , Marcelo Smrekar , Emanuel Pimentel Barbosa","doi":"10.1016/j.stamet.2014.09.003","DOIUrl":"10.1016/j.stamet.2014.09.003","url":null,"abstract":"<div><p>When production processes reach high quality standards they are known as high quality processes. In this situation, the conventional <span><math><mi>p</mi></math></span> charts (based on 3-sigma limits) used for monitoring non-conforming products have serious drawbacks in detecting changes in <span><math><mi>p</mi></math></span> due to excess of false alarm risk. In a previous paper, the authors showed a new <span><math><mi>p</mi></math></span> chart that provides a large improvement over the usual <span><math><mi>p</mi></math></span> chart in these situations. In this paper, authors propose a new corrected version of a double sampling (DS) control chart for monitoring the proportion <span><math><mi>p</mi></math></span> of non-conforming presented in the literature for large samples, in order to extend its applicability to the case of small samples. This procedure offers better statistical efficiency (in terms of the average run length) than the previous <span><math><mi>p</mi></math></span> charts, without increasing the sampling. Tables are provided to aid in the choice of DS parameters. 
The benefits of the corrected version of a DS chart for monitoring high-quality processes are illustrated with real data.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"23 ","pages":"Pages 35-49"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.09.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
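The average run length (ARL) criterion mentioned above can be computed exactly for a basic Shewhart p chart, which also exposes the small-p drawback: for high-quality processes the lower 3-sigma limit collapses to zero, so the chart can only signal upward and its false-alarm behavior degrades. A generic sketch, not the DS chart itself:

```python
from math import comb, sqrt

def p_chart_arl(p, n, p0=None):
    """ARL of a Shewhart p chart with 3-sigma limits set at the in-control
    level p0, when the true non-conforming proportion is p:
    ARL = 1 / P(a single sample of size n signals)."""
    if p0 is None:
        p0 = p
    se = sqrt(p0 * (1 - p0) / n)
    ucl, lcl = p0 + 3 * se, max(0.0, p0 - 3 * se)
    p_signal = sum(comb(n, c) * p**c * (1 - p)**(n - c)
                   for c in range(n + 1)
                   if c / n > ucl or c / n < lcl)
    return float('inf') if p_signal == 0.0 else 1.0 / p_signal

arl_in = p_chart_arl(0.1, 50)             # in-control ARL
arl_shift = p_chart_arl(0.2, 50, p0=0.1)  # ARL after an upward shift in p
```

The DS chart improves on this by taking a second sample only when the first is inconclusive, gaining ARL performance without raising the average sampling effort.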
Pub Date : 2015-03-01DOI: 10.1016/j.stamet.2014.10.003
Peixin Zhao , Yiping Yang
Varying coefficient partially linear models are commonly used for analyzing data measured repeatedly, such as longitudinal data and panel data. In this paper, the testing problem for varying coefficient partially linear models with repeated measurements is investigated. Based on the empirical likelihood method, the test statistics are constructed for some testing problems. The Wilks phenomenon of these test statistics is proved, and then the rejection regions are constructed. Some simulation studies are undertaken to investigate the power of the empirical likelihood based testing method.
{"title":"Semiparametric empirical likelihood tests in varying coefficient partially linear models with repeated measurements","authors":"Peixin Zhao , Yiping Yang","doi":"10.1016/j.stamet.2014.10.003","DOIUrl":"10.1016/j.stamet.2014.10.003","url":null,"abstract":"<div><p><span>Varying coefficient partially linear models are commonly used for analyzing data measured repeatedly, such as longitudinal data and panel data. In this paper, the testing problem for varying coefficient partially linear models with repeated measurements is investigated. Based on the </span>empirical likelihood method, the test statistics are constructed for some testing problems. The Wilks phenomenon of these test statistics is proved, and then the rejection regions are constructed. Some simulation studies are undertaken to investigate the power of the empirical likelihood based testing method.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"23 ","pages":"Pages 73-87"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.10.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
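The empirical likelihood machinery behind these tests can be illustrated for the simplest hypothesis, a population mean; by the Wilks phenomenon the statistic below is asymptotically chi-squared with 1 degree of freedom under H0. A textbook sketch (Owen's construction), not the paper's semiparametric version:

```python
from math import log

def el_log_ratio(x, mu0, iters=100):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0.
    Bisection solves sum d_i/(1 + lam*d_i) = 0 with d_i = x_i - mu0;
    the statistic is then 2 * sum log(1 + lam*d_i)."""
    d = [xi - mu0 for xi in x]
    if min(d) >= 0 or max(d) <= 0:
        return float('inf')          # mu0 outside the data's convex hull
    lo = -1.0 / max(d) + 1e-10       # keep every 1 + lam*d_i positive
    hi = -1.0 / min(d) - 1e-10
    score = lambda lam: sum(di / (1.0 + lam * di) for di in d)
    for _ in range(iters):           # score is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(log(1.0 + lam * di) for di in d)
```

Comparing this statistic against a chi-squared quantile gives the rejection region, exactly the structure the Wilks results in the paper justify for the semiparametric setting.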
Pub Date : 2015-03-01DOI: 10.1016/j.stamet.2014.09.002
Youngseuk Cho, Hokeun Sun, Kyeongjun Lee
Recently, progressive hybrid censoring schemes have become quite popular in a life-testing problem and reliability analysis. However, the limitation of the progressive hybrid censoring scheme is that it cannot be applied when few failures occur before time T. In this article, we propose a generalized progressive hybrid censoring scheme, which allows us to observe a pre-specified number of failures. So, the certain number of failures and their survival times are provided all the time. We also derive the exact distribution of the maximum likelihood estimator (MLE) as well as exact confidence interval (CI) for the parameter of the exponential distribution under the generalized progressive hybrid censoring scheme. The results of simulation studies and real-life data analysis are included to illustrate the proposed method.
{"title":"Exact likelihood inference for an exponential parameter under generalized progressive hybrid censoring scheme","authors":"Youngseuk Cho, Hokeun Sun, Kyeongjun Lee","doi":"10.1016/j.stamet.2014.09.002","DOIUrl":"10.1016/j.stamet.2014.09.002","url":null,"abstract":"<div><p><span>Recently, progressive hybrid censoring schemes have become quite popular in a life-testing problem and reliability analysis. However, the limitation of the progressive hybrid censoring scheme is that it cannot be applied when few failures occur before time </span><span><math><mi>T</mi></math></span><span><span><span>. In this article, we propose a generalized progressive hybrid censoring scheme, which allows us to observe a pre-specified number of failures. So, the certain number of failures and their survival times are provided all the time. We also derive the exact distribution of the maximum likelihood estimator (MLE) as well as </span>exact confidence interval (CI) for the parameter of the </span>exponential distribution under the generalized progressive hybrid censoring scheme. The results of simulation studies and real-life data analysis are included to illustrate the proposed method.</span></p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"23 ","pages":"Pages 18-34"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.09.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
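Under progressive Type-II censoring the exponential MLE again has a closed form through the total time on test, since each unit withdrawn at a failure time has been on test for exactly that long. A sketch of this standard building block (the paper's generalized hybrid scheme adds a time constraint on top of it):

```python
def exp_mle_progressive(failure_times, removals):
    """Exponential mean MLE under progressive Type-II censoring: at the
    i-th observed failure time t_(i), R_i surviving units are withdrawn,
    so TTT = sum (1 + R_i) * t_(i) and theta_hat = TTT / m for m failures."""
    ttt = sum(t * (1 + r) for t, r in zip(failure_times, removals))
    return ttt / len(failure_times)

theta_hat = exp_mle_progressive([1.0, 2.0], [1, 0])  # (2*1 + 1*2) / 2
```

Because 2*TTT/theta is chi-squared distributed in the purely progressive case, exact confidence intervals follow; the generalized hybrid scheme complicates this distribution, which is what the paper works out.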
Pub Date : 2015-03-01DOI: 10.1016/j.stamet.2014.09.004
Tao Chen , Kenneth A. Couch
We consider a method of moments approach for dealing with censoring at zero for data expressed in levels when researchers would like to take logarithms. A Box–Cox transformation is employed. We explore this approach in the context of linear regression where both dependent and independent variables are censored. We contrast this method to two others, (1) dropping records of data containing censored values and (2) assuming normality for censored observations and the residuals in the model. Across the methods considered, where researchers are interested primarily in the slope parameter, estimation bias is consistently reduced using the method of moments approach.
{"title":"An approximation of logarithmic functions in the regression setting","authors":"Tao Chen , Kenneth A. Couch","doi":"10.1016/j.stamet.2014.09.004","DOIUrl":"10.1016/j.stamet.2014.09.004","url":null,"abstract":"<div><p>We consider a method of moments approach for dealing with censoring at zero for data expressed in levels when researchers would like to take logarithms. A Box–Cox transformation is employed. We explore this approach in the context of linear regression where both dependent and independent variables are censored. We contrast this method to two others, (1) dropping records of data containing censored values and (2) assuming normality for censored observations and the residuals in the model. Across the methods considered, where researchers are interested primarily in the slope parameter, estimation bias is consistently reduced using the method of moments approach.</p></div>","PeriodicalId":48877,"journal":{"name":"Statistical Methodology","volume":"23 ","pages":"Pages 50-58"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.stamet.2014.09.004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"55092512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
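The role of the Box–Cox transformation here is easy to see in code: for small λ it tracks log(x) but, unlike the logarithm, remains finite at x = 0, which is what makes it usable with data censored at zero. This illustrates only the transformation itself, not the authors' method-of-moments estimator:

```python
from math import log

def box_cox(x, lam):
    """Box-Cox transform: (x**lam - 1)/lam for lam != 0, log(x) at lam == 0.
    As lam -> 0 it converges to log(x); at x = 0 with lam > 0 it takes the
    finite value -1/lam, whereas log(0) is undefined."""
    if lam == 0:
        return log(x)
    return (x**lam - 1.0) / lam

box_cox(0.0, 0.5)  # -2.0, so zero observations need not be dropped
```

This finiteness at zero is why the method can keep censored-at-zero records in levels that a naive log specification would force researchers to drop.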