Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01473.X | pp. 133-144
Testing for Parameter Variation in Non-Linear Regression Models
B. McCabe, S. Leybourne
This paper addresses the problem of testing for purely random parameter variation in non-linear regression models. Based on different approximations to the true density of the data, score-type tests are constructed and their asymptotic distributions are derived. The local power of the tests is investigated both theoretically and via Monte Carlo simulation. An empirical testing example, involving a well-known non-linear aggregate demand for money function, is also given.
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01941.X | pp. 797-816
Regional Modelling of Extreme Storms Via Max-Stable Processes
S. Coles
Asymptotic models for extremes of random processes often form the basis for estimating the extremal behaviour of environmental phenomena. Most such phenomena have a spatial dimension, and the aim of this paper is to develop a procedure for modelling in continuous space the spatial dependence within extreme events. A principal objective in the analysis, as with other current research on extremes, is to base inference on as much of the available data as possible. The modelling procedures are justified on simulated data and subsequently applied to a series of rainfall data.
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01948.X | pp. 897-911
Improved estimators of variance components with smaller probability of negativity
R. J. Kelly, T. Mathew
A linear model with two variance components is considered: one variance component (say, σ₁² ≥ 0) corresponding to a random effect, and a second variance component (say, σ² > 0) corresponding to the experimental errors. A class of invariant quadratic estimators (IQEs) is characterized, having uniformly smaller mean-squared error (MSE), and uniformly smaller probability of negativity, compared with the analysis-of-variance (ANOVA) estimator of σ₁².
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01945.X | pp. 849-870
Multiregression dynamic models
C. Queen, Jim Q. Smith
Multiregression dynamic models are defined to preserve certain conditional independence structures over time across a multivariate time series. They are non-Gaussian and yet they can often be updated in closed form. The first two moments of their one-step-ahead forecast distribution can be easily calculated. Furthermore, they can be built to contain all the features of the univariate dynamic linear model and promise more efficient identification of causal structures in a time series than has been possible in the past.
Pub Date: 1993-07-01 | DOI: 10.1111/J.2517-6161.1993.TB01927.X | pp. 613-627
Tools for the symbolic computation of asymptotic expansions
D. Andrews, J. Stafford
This paper describes a collection of procedures for the systematic computation of asymptotic expansions that are common in statistical theory and practice: expansions of functions of sums of independent and identically distributed random variables. The procedures permit the expansion of maximum likelihood estimates, the associated deviance or drop in likelihood, and more general functions of random variables with distributions involving one or more parameters. The procedures are illustrated with examples involving general and specific laws.
Much of statistical theory and practice is based on asymptotic expansions. Many programs are available to assist in the numerical evaluation of such expansions, but there is a need for computational tools to assist in their derivation and symbolic evaluation. Heller (1991) shows how symbolic calculation may be used in a wide variety of statistical problems. Kendall (1988, 1990) gives procedures for the symbolic computation of expressions in the analysis of the diffusion of Euclidean shape. Silverman and Young (1987) use computer algebra to evaluate criteria on which the decision to smooth a bootstrap distribution is based. Young and Daniels (1990) apply symbolic computation to evaluate expressions in the assessment of bootstrap bias. Venables (1985) uses symbolic computation heavily to obtain expansions of maximum marginal likelihood estimates, most notably Fisher's A-statistic. Barndorff-Nielsen and Blæsild (1986) describe procedures for the numerical calculation of Bartlett factors in cases where cumulants of the likelihood function may be specified. Most of these references involve the evaluation of complicated formulae in particular cases and not the derivation of the formulae themselves. Here we give general procedures for both the derivation of formulae and their evaluation in specific cases.
The derivation of asymptotic expansions is typically a simple but laborious task. Consider, for example, the calculation of the expectation of the likelihood ratio test statistic for a one-parameter family to order 1/n. This may be accomplished in general
Pub Date: 1993-07-01 | DOI: 10.1111/J.2517-6161.1993.TB01928.X | pp. 629-642
Dynamic Hierarchical Models
D. Gamerman, H. Migon
An analysis of a time series of cross-sectional data is considered from a Bayesian perspective. Information is modelled in terms of prior distributions, and the stratified parametric linear models developed by Lindley and Smith and the dynamic linear models developed by Harrison and Stevens are merged into a general framework. This framework is shown to include many models proposed in econometrics and experimental design. Properties of the model are derived and shrinkage estimators are reassessed. Evolution, smoothing and the passage of data information through the levels of the hierarchy are discussed. Inference with an unknown scalar observation variance is presented, and an extension to the non-linear case is proposed.
Pub Date: 1993-07-01 | DOI: 10.1111/J.2517-6161.1993.TB01929.X | pp. 643-652
Interpolated Nonparametric Prediction Intervals and Confidence Intervals
R. Beran, P. Hall
In several important statistical problems, prediction intervals and confidence intervals can be constructed with coverage levels which are known precisely but cannot be rendered equal to predetermined levels such as 0.95. One solution to this difficulty is to interpolate between such intervals. We show that simple linear interpolation reduces the order of coverage error, but that higher orders of interpolation produce no further improvement. The error is reduced by a factor of 1/n for prediction intervals and 1/√n for confidence intervals, where n denotes sample size. In the case of confidence intervals for quantiles, linear interpolation provides particularly accurate intervals which err on the side of conservatism.
Pub Date: 1993-07-01 | DOI: 10.1111/J.2517-6161.1993.TB01925.X | pp. 569-598
From image deblurring to optimal investments: maximum likelihood solutions for positive linear inverse problems
Y. Vardi, D. Lee
The problem of recovering an input signal from a blurred output, in an input-output system with linear distortion, is ubiquitous in science and technology. When the blurred output is not degraded by statistical noise the problem is entirely deterministic and amounts to a mathematical inversion of a linear system with positive parameters, subject to positivity constraints on the solution. We show that all such linear inverse problems with positivity restrictions (LININPOS problems for short) can be interpreted as statistical estimation problems from incomplete data based on infinitely large 'samples', and that maximum likelihood (ML) estimation and the EM algorithm provide a straightforward method of solution for such problems.
Pub Date: 1992-07-01 | DOI: 10.1111/J.2517-6161.1992.TB01443.X | pp. 657-683
Constrained Monte Carlo Maximum Likelihood for Dependent Data
C. Geyer, E. Thompson
Maximum likelihood estimates (MLEs) in autologistic models and other exponential family models for dependent data can be calculated with Markov chain Monte Carlo methods (the Metropolis algorithm or the Gibbs sampler), which simulate ergodic Markov chains having equilibrium distributions in the model. From one realization of such a Markov chain, a Monte Carlo approximant to the whole likelihood function can be constructed. The parameter value (if any) maximizing this function approximates the MLE.
Pub Date: 1992-07-01 | DOI: 10.1111/J.2517-6161.1992.TB01450.X | pp. 773-780
A Comparison of Variance Estimators in Nonparametric Regression
Chris Carter, G. Eagleson
We compare two estimators of error variance, both based on quadratic forms in the residuals about smoothing spline fits to data. The estimators are compared over the whole range of values of the smoothing parameter as well as for data-based choices of the smoothing parameter. We show that the commonly used estimator of variance has the serious drawback of underestimating the error variance for small choices of the smoothing parameter. This drawback is not shared by a simple, but more computationally intensive, alternative.