"Boundary Bias Correction Using Weighting Method in Presence of Nonresponse in Two-Stage Cluster Sampling"
Nelson Kiprono Bii, C. O. Onyango, J. Odhiambo. Journal of Probability and Statistics, 2019-06-02. DOI: 10.1155/2019/6812795

Abstract: Owing to boundary effects, kernel density estimators are often inconsistent when estimating a density near a finite endpoint of its support. To address this, researchers have proposed applying an optimal bandwidth to balance the bias-variance trade-off when estimating a finite population mean. This, however, does not eliminate the boundary bias. In this paper, a weighting method for compensating for nonresponse is proposed. Asymptotic properties of the proposed estimator of the population mean are derived, and under mild assumptions the estimator is shown to be asymptotically consistent.
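As a hedged illustration of the boundary problem this abstract describes (not the paper's weighting method), the sketch below shows a plain Gaussian-kernel density estimator underestimating an Exp(1) density at its support endpoint x = 0, where the true density is 1, while remaining accurate in the interior:

```python
import numpy as np

def gaussian_kde_at(x, data, h):
    """Plain Gaussian-kernel density estimate at the point x."""
    u = (x - data) / h
    return np.mean(np.exp(-0.5 * u**2)) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=100_000)  # true density f(0) = 1

h = 0.05
est_at_boundary = gaussian_kde_at(0.0, data, h)  # near 0.5, not 1: about half
est_interior    = gaussian_kde_at(1.0, data, h)  # the kernel mass falls outside
print(est_at_boundary, est_interior)             # the support at x = 0
```

Roughly half the kernel mass lies outside the support at x = 0, so the estimate there hovers near f(0)/2 regardless of sample size; an optimal bandwidth shrinks variance and interior bias but, as the abstract notes, does not remove this boundary bias.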
"Fully Bayesian Estimation of Simultaneous Regression Quantiles under Asymmetric Laplace Distribution Specification"
Josephine Merhi Bleik. Journal of Probability and Statistics, 2019-06-02. DOI: 10.1155/2019/8610723

Abstract: In this paper, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term follows an asymmetric Laplace distribution, and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the noncrossing property of quantiles. For implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional distributions. The performance and competitiveness of the method relative to alternatives are shown in simulated examples.
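For context, the link the abstract relies on can be sketched as follows (an illustration of the asymmetric-Laplace/check-loss connection, not the paper's Gibbs sampler): maximizing an asymmetric Laplace likelihood with skewness parameter tau is equivalent to minimizing the check (pinball) loss, whose minimizer is the tau-quantile.

```python
import numpy as np

def check_loss(u, tau):
    """Check (pinball) loss; minimizing its sum is equivalent to maximizing
    an asymmetric Laplace likelihood with skewness parameter tau."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(1)
y = rng.normal(size=10_001)

tau = 0.9
grid = np.linspace(-3, 3, 2001)
losses = [check_loss(y - b, tau).sum() for b in grid]
b_hat = grid[np.argmin(losses)]

# b_hat lands on the empirical 90% quantile of y
print(b_hat, np.quantile(y, tau))
```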
"New Advances in Biostatistics"
Yichuan Zhao, A. Abebe, L. Qi, M. Zhang, Xu Zhang. Journal of Probability and Statistics, 2019-05-16. DOI: 10.1155/2019/1352310

1. Department of Mathematics and Statistics, Georgia State University, Atlanta, GA, USA
2. Department of Mathematics and Statistics, Auburn University, Auburn, AL, USA
3. Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, CA, USA
4. Department of Statistics, Purdue University, West Lafayette, IN, USA
5. Department of Internal Medicine, Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
"New Link Functions for Distribution-Specific Quantile Regression Based on Vector Generalized Linear and Additive Models"
V. Miranda-Soberanis, T. Yee. Journal of Probability and Statistics, 2019-05-07. DOI: 10.1155/2019/3493628

Abstract: In the usual quantile regression setting, the distribution of the response given the explanatory variables is unspecified. In this work, the distribution is specified, and we introduce new link functions to directly model specified quantiles of seven one-parameter continuous distributions. Using the vector generalized linear and additive model (VGLM/VGAM) framework, we transform certain prespecified quantiles into linear or additive predictors. Our parametric quantile regression approach adopts VGLMs/VGAMs because they can handle multiple linear predictors and encompass many distributions beyond the exponential family. Coupled with the ability to fit smoothers, the strong underlying distributional assumption can be relaxed so as to offer a semiparametric-type analysis. By allowing multiple linear and additive predictors simultaneously, the quantile crossing problem can be avoided by enforcing parallelism through constraint matrices. This article gives details of a software implementation, the VGAMextra package for R. Both the data and the recently developed software used in this paper are freely downloadable from the Internet.
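The software described is the VGAMextra package for R; as a language-neutral sketch of the "quantile link" idea (hypothetical, not the package's API), consider a one-parameter exponential response: its tau-quantile is Q(tau) = -log(1 - tau)/lambda, so a log link turns the quantile into a linear/additive predictor from which lambda is recovered.

```python
import numpy as np

# For an exponential response with rate lam, the tau-quantile is
#   Q(tau) = -log(1 - tau) / lam,
# so eta = log Q(tau) is a natural linear predictor: modelling eta in the
# covariates induces a model for lam through the inverse link below.

tau = 0.5
lam = 2.0
q = -np.log(1 - tau) / lam            # true median of Exp(rate=2)

eta = np.log(q)                       # link: quantile -> linear predictor
lam_back = -np.log(1 - tau) / np.exp(eta)   # inverse link recovers lam

rng = np.random.default_rng(2)
sample_median = np.median(rng.exponential(scale=1 / lam, size=200_000))
print(q, lam_back, sample_median)     # sample median agrees with Q(0.5)
```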
"Bootstrapping Nonparametric Prediction Intervals for Conditional Value-at-Risk with Heteroscedasticity"
E. Torsen, Lema Logamou Seknewna. Journal of Probability and Statistics, 2019-05-07. DOI: 10.1155/2019/7691841

Abstract: Using the bootstrap method, we construct nonparametric prediction intervals for the Conditional Value-at-Risk of returns that admit a heteroscedastic location-scale model, where the location and scale functions are smooth and the distribution of the error term is unknown and assumed to be uncorrelated with the independent variable. The prediction interval performs well for large sample sizes and is relatively narrow, which is consistent with results in the literature.
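A minimal sketch of the percentile-bootstrap construction behind such intervals, with i.i.d. returns standing in for the paper's location-scale model and a simple empirical CVaR estimator assumed (the paper's procedure is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical i.i.d. returns; in the paper the location and scale are smooth
# functions of a covariate, constants here purely for illustration.
returns = 0.001 + 0.02 * rng.standard_normal(1000)
losses = -returns

def cvar(x, level=0.95):
    """Empirical Conditional Value-at-Risk: mean loss at or beyond the VaR."""
    var = np.quantile(x, level)
    return x[x >= var].mean()

B = 2000                                       # bootstrap replicates
boot = np.array([cvar(rng.choice(losses, size=losses.size, replace=True))
                 for _ in range(B)])
lo, hi = np.quantile(boot, [0.025, 0.975])     # 95% percentile interval
print(lo, hi)
```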
"Biostatistical Assessment of Mutagenicity Studies: A Stepwise Confidence Procedure"
Michael J. Adjabui, Nathaniel K. Howard, M. Akamba. Journal of Probability and Statistics, 2019-04-14. DOI: 10.1155/2019/3249097

Abstract: This paper addresses the problem of identifying the maximum safe dose in the context of noninferiority trials in which several doses of a toxicological compound exist. Statistical methodology for identifying the maximum safe dose is available for three-arm noninferiority designs with only one experimental drug treatment. Extensions of this methodology to several experimental groups exist, but they require multiplicity adjustment. However, if the experimental or treatment groups can be ordered a priori according to their treatment effects, then multiplicity adjustment is unnecessary. Assuming homogeneity of variances across dose groups in a normal setting, we employ the generalized Fieller confidence interval method in a stepwise multiple comparison procedure that incorporates the partitioning principle in order to control the familywise error rate (FWER). Simulation results reveal that the procedure properly controls the FWER in the strong sense. Moreover, the power of the procedure increases with increasing sample size and with the ratio of mean differences. We illustrate the procedure with a mutagenicity dataset from a clinical study.
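The basic Fieller interval underlying the generalized version used in this procedure can be sketched as follows (two independent normal samples assumed; the paper embeds a generalized form in a stepwise test):

```python
import numpy as np
from scipy import stats

def fieller_ci(x, y, conf=0.95):
    """Fieller confidence interval for E[x]/E[y] from two independent samples
    (basic form: roots of the quadratic (mx - rho*my)^2 = t^2*(vx + rho^2*vy))."""
    nx, ny = len(x), len(y)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1) / nx, y.var(ddof=1) / ny
    t = stats.t.ppf(1 - (1 - conf) / 2, nx + ny - 2)
    a = my**2 - t**2 * vy
    b = -2 * mx * my
    c = mx**2 - t**2 * vx
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:       # denominator too noisy: interval is unbounded
        return None
    r = np.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

rng = np.random.default_rng(4)
x = rng.normal(2.0, 1.0, 200)    # numerator sample,   mean 2
y = rng.normal(4.0, 1.0, 200)    # denominator sample, mean 4
lo, hi = fieller_ci(x, y)        # typically brackets the true ratio 0.5
print(lo, hi)
```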
"Group Identification and Variable Selection in Quantile Regression"
A. Alkenani, Basim Shlaibah Msallam. Journal of Probability and Statistics, 2019-04-10. DOI: 10.1155/2019/8504174

Abstract: Using the Pairwise Absolute Clustering and Sparsity (PACS) penalty, we propose a regularized quantile regression (QR) method, QR-PACS. The PACS penalty achieves both the elimination of insignificant predictors and the combination of predictors with indistinguishable coefficients (IC), the two issues that arise in the search for the true model. QR-PACS extends PACS from mean regression settings to QR settings. The paper shows that QR-PACS can yield promising predictive precision as well as identify related groups in both simulations and real data.
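A toy version of the idea (a simplified stand-in for the PACS penalty, not the paper's algorithm): penalize the check loss with both a sparsity term and a pairwise-fusion term, so an irrelevant coefficient is shrunk toward zero while two indistinguishable coefficients are pulled together.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Check (pinball) loss for quantile level tau."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(5)
n = 400
X = rng.standard_normal((n, 3))
# true coefficients: x1 and x2 share the same effect, x3 is irrelevant
y = X[:, 0] + X[:, 1] + rng.standard_normal(n)

tau, lam = 0.5, 5.0

def objective(beta):
    fit = check_loss(y - X @ beta, tau).sum()
    # simplified PACS-style penalty: sparsity plus pairwise fusion
    pen = np.abs(beta).sum() + np.abs(beta[0] - beta[1])
    return fit + lam * pen

beta_hat = minimize(objective, np.zeros(3), method="Powell").x
print(beta_hat)   # first two coefficients fused near 1, third shrunk near 0
```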
"Exponentiated Inverse Rayleigh Distribution and an Application to Coating Weights of Iron Sheets Data"
G. S. Rao, S. Mbwambo. Journal of Probability and Statistics, 2019-04-01. DOI: 10.1155/2019/7519429

Abstract: This article introduces a generalization of the inverse Rayleigh distribution, the exponentiated inverse Rayleigh distribution (EIRD), which provides a more flexible model for lifetime data. Some statistical properties of the EIRD are investigated, such as the mode, quantiles, moments, reliability, and hazard function. We describe different methods of parametric estimation for the EIRD, using maximum likelihood estimators, percentile-based estimators, least squares estimators, and weighted least squares estimators, and we compare these estimates using extensive numerical simulations. The performance of the proposed estimation methods is compared by Monte Carlo simulation for both small and large samples. To illustrate the methods in a practical application, we analyze real-world data on coating weights of iron sheets obtained from the ALAF industry, Tanzania, during January-March 2018; ALAF uses aluminum-zinc galvanization technology in its coating process. This application identifies the EIRD as a better model than other well-known distributions for these lifetime data.
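To illustrate the percentile-based (CDF least squares) estimation the abstract mentions, the sketch below uses the plain inverse Rayleigh baseline, whose CDF is F(x) = exp(-theta/x^2) for x > 0, rather than the EIRD itself:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = 2.0
u = rng.uniform(size=5000)
x = np.sqrt(-theta / np.log(u))     # inversion sampling: Q(p) = sqrt(-theta/log p)

xs = np.sort(x)
p = np.arange(1, xs.size + 1) / (xs.size + 1)   # plotting positions i/(n+1)

# percentile-based estimate: choose theta minimising the squared distance
# between the model CDF at the order statistics and the plotting positions
grid = np.linspace(0.5, 4.0, 701)
sse = [np.sum((np.exp(-t / xs**2) - p) ** 2) for t in grid]
theta_hat = grid[np.argmin(sse)]
print(theta_hat)                    # close to the true theta = 2.0
```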
"Parameter Evaluation for a Statistical Mechanical Model for Binary Choice with Social Interaction"
A. Opoku, Godwin Osabutey, C. Kwofie. Journal of Probability and Statistics, 2019-03-04. DOI: 10.1155/2019/3435626

Abstract: In this paper we use a statistical mechanical model as a paradigm for educational choices when the reference population is partitioned according to the socioeconomic attributes of gender and residence. We study how educational attainment is influenced by these attributes in five selected developing countries. The model has a social part and a private incentive part, with coefficients measuring, respectively, the influence individuals have on each other and the external influence on individuals. The methods of partial least squares and ordinary least squares are used to estimate the parameters of the interacting and noninteracting models, respectively. This work differs from the earlier work that motivated it in the following ways: (a) the reference population is divided into subgroups of unequal sizes, (b) the proportion of individuals in each subgroup may depend on the population size N, and (c) partial least squares, rather than the least squares method of the earlier work, is used to estimate the parameters of the model with social interaction.
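For orientation, the simplest statistical mechanical binary-choice model of this kind (a single-group mean-field sketch in the spirit of the model, not the paper's multi-group specification) determines the average choice m from the self-consistency equation m = tanh(J*m + h), with J the social-interaction coefficient and h the private incentive:

```python
import numpy as np

def mean_choice(J, h, m0=0.1, iters=200):
    """Solve m = tanh(J*m + h) by fixed-point iteration: the mean-field
    average choice in a binary-choice model with social interaction."""
    m = m0
    for _ in range(iters):
        m = np.tanh(J * m + h)
    return m

print(mean_choice(J=0.5, h=0.2))   # weak interaction: unique fixed point
print(mean_choice(J=1.5, h=0.0))   # strong interaction: nontrivial solution
```

When J exceeds 1 the equation admits nonzero solutions even with no private incentive (h = 0), which is what makes the social-interaction coefficient identifiable from aggregate choice data.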
"Improved Small Sample Inference on the Ratio of Two Coefficients of Variation of Two Independent Lognormal Distributions"
A. Wong, L. Jiang. Journal of Probability and Statistics, 2019-03-03. DOI: 10.1155/2019/7173416

Abstract: Without research tools and procedures that yield consistent measurements, researchers would be unable to draw conclusions, formulate theories, or make claims about the generalizability of their results. In statistics, the coefficient of variation is commonly used as an index of the reliability of measurements, so comparing coefficients of variation is of special interest. Moreover, the lognormal distribution is frequently used to model data from many fields, such as health and medical research. In this paper, we propose a simulated Bartlett-corrected likelihood ratio approach to obtain inference concerning the ratio of two coefficients of variation of lognormal distributions. Simulation studies show that the proposed method is extremely accurate even when the sample size is small.
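The lognormal fact that makes this comparison natural: for X ~ Lognormal(mu, sigma^2), the coefficient of variation is sqrt(exp(sigma^2) - 1), which is free of mu, so the ratio of two CVs depends only on the two sigma parameters. A quick numerical check (illustrative; not the paper's Bartlett-corrected procedure):

```python
import numpy as np

def lognormal_cv(sigma):
    """CV of a lognormal distribution: sqrt(exp(sigma^2) - 1), free of mu."""
    return np.sqrt(np.exp(sigma**2) - 1)

rng = np.random.default_rng(7)
x = rng.lognormal(mean=1.0, sigma=0.5, size=400_000)
cv_empirical = x.std(ddof=1) / x.mean()

print(lognormal_cv(0.5), cv_empirical)          # both near 0.533
ratio = lognormal_cv(0.5) / lognormal_cv(0.8)   # ratio of two CVs: the
print(ratio)                                    # quantity the paper targets
```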