{"title":"Proposal of the vote of thanks in discussion of Cule, M., Samworth, R., and Stewart, M.: Maximum likelihood estimation of a multidimensional logconcave density","authors":"K. Rufibach","doi":"10.5167/UZH-38539","DOIUrl":"https://doi.org/10.5167/UZH-38539","url":null,"abstract":"","PeriodicalId":17425,"journal":{"name":"Journal of the royal statistical society series b-methodological","volume":"188 1","pages":"577-578"},"PeriodicalIF":0.0,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76952696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1996-07-01 | DOI: 10.1111/J.2517-6161.1996.TB02085.X
S. Coles, J. Tawn
"Modelling Extremes of the Areal Rainfall Process", Journal of the Royal Statistical Society, Series B (Methodological), pp. 329-347.
Risk assessment for many hydrological structures requires an estimate of the extremal behaviour of the rainfall regime within a specified catchment region. In most cases it is the spatially aggregated rainfall which is the key process, though in practice only pointwise rainfall measurements from a network of sites over the region are available. In this paper we address the problem of making inferences about the extremal properties of the aggregated process from the pointwise data. Working within the usual extreme value paradigm, a model is derived in which the resulting distribution is determined by the marginal tail behaviour and spatial dependence at extreme levels of the process. Data collected from a region in the south-west of England are used to illustrate the procedure.
Pub Date: 1996-07-01 | DOI: 10.1111/J.2517-6161.1996.TB02087.X
P. Hall, Prakash N. Patil
"On the Choice of Smoothing Parameter, Threshold and Truncation in Nonparametric Regression by Non-linear Wavelet Methods", Journal of the Royal Statistical Society, Series B (Methodological), pp. 361-377.
Concise asymptotic theory is developed for non-linear wavelet estimators of regression means, in the context of general error distributions, general designs, general normalizations in the case of stochastic design, and non-structural assumptions about the mean. The influence of the tail weight of the error distribution is addressed in the setting of choosing threshold and truncation parameters. Mainly, the tail weight is described in an extremely simple way, by a moment condition; previous work on this topic has generally imposed the much more stringent assumption that the error distribution be normal. Different approaches to correction for stochastic design are suggested. These include conventional kernel estimation of the design density, in which case the interaction between the smoothing parameters of the non-linear wavelet estimator and the linear kernel method is described.
Pub Date: 1996-07-01 | DOI: 10.1111/J.2517-6161.1996.TB02083.X
M. Goldstein, A. O’Hagan
"Bayes Linear Sufficiency and Systems of Expert Posterior Assessments", Journal of the Royal Statistical Society, Series B (Methodological), pp. 301-316.
Data arising in the form of expert assessments are received by a decision maker. The decision maker is required to estimate a set of unknown quantities, and receives expert assessments at varying levels of accuracy, on samples of the quantities of interest. We present a Bayes linear analysis of this problem. In the absence of other assessments, the decision maker will accept as his or her current estimate of any single quantity the most accurate received assessment of that quantity. This leads to a sufficiency property which allows a simple decomposition of the error structure of assessments. Bayes linear estimation is then used by the decision maker to estimate each quantity of interest given an arbitrary collection of received assessments. The analysis is motivated throughout by a practical context in which a large company needs to estimate costs for renovation of assets. The methodology is illustrated with a numerical example.
Pub Date: 1996-07-01 | DOI: 10.1111/J.2517-6161.1996.TB02086.X
C. Farrington
"On Assessing Goodness of Fit of Generalized Linear Models to Sparse Data", Journal of the Royal Statistical Society, Series B (Methodological), pp. 349-360.
Approximations to the first three moments of Pearson's statistic are obtained for noncanonical generalized linear models, extending the results of McCullagh. A first-order modification to Pearson's statistic is proposed which induces local orthogonality with the regression parameters, resulting in substantial simplifications and increased power. Accurate and easily computed approximations to the moments of the modified Pearson statistic conditional on the estimated regression parameters are obtained for testing goodness of fit to sparse data. Both the Pearson statistic and its modification are shown to be asymptotically independent of the regression parameters. Simulation studies and examples are given.
Pub Date: 1996-07-01 | DOI: 10.1111/J.2517-6161.1996.TB02084.X
C. Heyde, R. Morton
"Quasi-Likelihood and Generalizing the EM Algorithm", Journal of the Royal Statistical Society, Series B (Methodological), pp. 317-327.
This paper is concerned with situations in which there are missing or otherwise incomplete data and the full likelihood may not be available. Extensions of the EM algorithm are developed to deal with estimation via general estimating functions and in particular the quasi-score. The E-step is replaced by projecting the quasi-score and the M-step requires the solution of an estimating equation. The standard EM algorithm can be obtained as a particular case if the likelihood is available.
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01947.X
Lawrence G. Tatum, C. Hurvich
"High Breakdown Methods of Time Series Analysis", Journal of the Royal Statistical Society, Series B (Methodological), pp. 881-896.
A robust form of the discrete Fourier transform is developed that can handle large amounts of contamination and patchy outliers. We use robust regression to fit a sine and cosine coefficient at each Fourier frequency, and these coefficients are then inverse Fourier transformed to give a filtered version of the data. The filtered series can then be analysed with conventional methods. The limiting breakdown bound of the filter is 50%. Other properties of the filter are also given. The performance of our method is compared, in a Monte Carlo study, with that of a data cleaner of Martin and Thomson. A comparison of the methods, including an outlier detection procedure, is also carried out on a real data set with patchy outliers.
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01470.X
G. Grunwald, A. Raftery, P. Guttorp
"Time Series of Continuous Proportions", Journal of the Royal Statistical Society, Series B (Methodological), pp. 103-116.
A vector of continuous proportions consists of the proportions of some total accounted for by its constituent components. An example is the proportions of world motor vehicle production by Japan, the USA and all other countries. We consider the situation where time series data are available and where interest focuses on the proportions rather than the actual amounts. Reasons for analysing such time series include estimation of the underlying trend, estimation of the effect of covariates and interventions, and forecasting. We develop a state space model for time series of continuous proportions. Conditionally on the unobserved state, the observations are assumed to follow the Dirichlet distribution, often considered to be the most natural distribution on the simplex. The state follows the Dirichlet conjugate distribution which is introduced here. Thus the model, although based on the Dirichlet distribution, does not have its restrictive independence properties. Covariates, trends, seasonality and interventions may be incorporated in a natural way. The model has worked well when applied to several examples, and we illustrate with components of world motor vehicle production.
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01939.X
T. Hastie, R. Tibshirani
"Varying-Coefficient Models", Journal of the Royal Statistical Society, Series B (Methodological), pp. 757-779.
We explore a class of regression and generalized regression models in which the coefficients are allowed to vary as smooth functions of other variables. General algorithms are presented for estimating the models flexibly and some examples are given. This class of models ties together generalized additive models and dynamic generalized linear models into one common framework. When applied to the proportional hazards model for survival data, this approach provides a new way of modelling departures from the proportional hazards assumption.
Pub Date: 1993-09-01 | DOI: 10.1111/J.2517-6161.1993.TB01942.X
D. Titterington
"A Contamination Model and Resistant Estimation within the Sampling Paradigm", Journal of the Royal Statistical Society, Series B (Methodological), pp. 817-827.
Copas's model for contamination in binary regression structures is translated into the sampling paradigm approach that is familiar in discriminant analysis. A parallel treatment of the two models is afforded by a common expression of the log-likelihood. Resistant parameter estimation is discussed for prevalence rates and for parameters, including discriminant functions, within component distributions. The methods are illustrated on a breast cancer example.