Bayesian spatial models for small area estimation of proportions
Pub Date: 2002-10-01 | DOI: 10.1191/1471082x02st032oa
F. A. S. Moura
This article presents a logistic hierarchical model approach for small area prediction of proportions, taking into account both possible spatial and unstructured heterogeneity effects. The posterior distributions of the proportion predictors are obtained via Markov chain Monte Carlo (MCMC) methods, which automatically account for the extra uncertainty associated with the hyperparameters. The procedures are applied to a real data set, and comparisons are made under several settings, including a quite general logistic hierarchical model with spatial structure plus unstructured heterogeneity for the small area effects. A model selection criterion based on the Expected Prediction Deviance is proposed, and its utility for selecting among competing models in the small area prediction context is examined.
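As an illustrative sketch (ours, not drawn from the paper): a logistic hierarchical model with both spatially structured and unstructured small area effects is commonly written as

\[
y_i \mid p_i \sim \mathrm{Binomial}(n_i, p_i), \qquad \mathrm{logit}(p_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + v_i,
\]
\[
v_i \sim N(0, \sigma_v^2), \qquad u_i \mid u_{-i} \sim N\Big(\frac{1}{m_i}\sum_{j \in \partial_i} u_j,\; \frac{\sigma_u^2}{m_i}\Big),
\]

where $\partial_i$ indexes the neighbours of area $i$ and $m_i$ is their number: the conditional autoregressive terms $u_i$ carry the spatial structure, the exchangeable terms $v_i$ the unstructured heterogeneity, and posterior inference for the $p_i$ proceeds by MCMC.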
{"title":"Bayesian spatial models for small area estimation of proportions","authors":"Fas Moura","doi":"10.1191/1471082x02st032oa","DOIUrl":"https://doi.org/10.1191/1471082x02st032oa","url":null,"abstract":"This article presents a logistic hierarchical model approach for small area prediction of proportions, taking into account both possible spatial and unstructured heterogeneity effects. The posterior distributions of the proportion predictors are obtained via Markov Chain Monte Carlo methods. This automatically takes into account the extra uncertainty associated with the hyperparameters. The procedures are applied to a real data set and comparisons are made under several settings, including a quite general logistic hierarchical model with spatial structure plus unstructured heterogeneity for small area effects. A model selection criterion based on the Expected Prediction Deviance is proposed. Its utility for selecting among competitive models in the small area prediction context is examined.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122524343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semiparametric modelling of spatial binary observations
Pub Date: 2002-07-01 | DOI: 10.1191/1471082x02st023oa
M. Alfò, P. Postiglione
In the past decade various attempts have been made to extend standard random effects models to the analysis of spatial observations. This extension is a source of theoretical difficulty owing to the multidirectional dependence among neighbouring observations, and much of the previous work has been based on parametric assumptions about the random effects distribution. To avoid such restrictions, we propose a conditional model for spatial binary responses that does not assume a parametric distribution for the random effects. The model parameters are estimated using the EM algorithm for nonparametric maximum likelihood estimation of a mixing distribution. To illustrate the proposed approach, the model is applied to a remotely sensed image of the Nebrodi Mountains (Italy).
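A minimal sketch of such a specification (our illustration; the autologistic neighbour term in particular is an assumption, not necessarily the authors' exact conditional model): with the mixing distribution left as discrete mass points, the likelihood becomes a finite mixture,

\[
P(y_i = 1 \mid b) = \mathrm{logit}^{-1}\Big(\mathbf{x}_i^{\top}\boldsymbol{\beta} + \gamma \sum_{j \in \partial_i} y_j + b\Big), \qquad L(\boldsymbol{\theta}) = \prod_{i} \sum_{k=1}^{K} \pi_k\, P(y_i = 1 \mid b_k)^{y_i}\, P(y_i = 0 \mid b_k)^{1-y_i},
\]

where the locations $b_k$ and weights $\pi_k$ are estimated jointly with $(\boldsymbol{\beta}, \gamma)$ by EM; nonparametric maximum likelihood leaves the number and placement of the mass points free instead of assuming, say, a Gaussian random effect.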
{"title":"Semiparametric modelling of spatial binary observations","authors":"M. Alfò, P. Postiglione","doi":"10.1191/1471082x02st023oa","DOIUrl":"https://doi.org/10.1191/1471082x02st023oa","url":null,"abstract":"In the past decade various attempts have been made to extend standard random effects models to the analysis of spatial observations. This extension is a source of theoretical difficulty due to the multidirectional dependence among nearest observations; much of the previous work was based on parametric assumptions about the random effects distribution. To avoid any restriction, we propose a conditional model for spatial binary responses, without assuming a parametric distribution for the random effects. The model parameters are estimated using the EM algorithm for nonparametric maximum likelihood estimation of a mixing distribution. To illustrate the proposed approach, the model is applied to a remote sensed image of the Nebrodi Mountains (Italy).","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"323 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133712694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving financial risk assessment through dependency
Pub Date: 2002-07-01 | DOI: 10.1191/1471082x02st028oa
Beatriz Mendes, A. Moretti
Understanding dependence between financial markets is crucial when measuring globally integrated exposures to risk. A natural first step is to investigate the joint behaviour of the markets' most representative indexes. We fit bivariate extreme value models, by both parametric and nonparametric methods, to the componentwise maxima and minima computed monthly from several pairs of indexes representing the North American, Latin American and emerging markets. We analyse the role of asymmetric models, identifying which market drives the dependence, and express the degree of dependence using measures of linear and nonlinear association such as the linear correlation coefficient ρ and the measure τ based on the dependence function. We discuss the interpretation of τ as the conditional probability that a crash occurs in one market given that a catastrophic event has occurred in another. We assess risk by computing probabilities associated with joint extreme events and by computing joint risk measures, and we show empirically that the joint Value-at-Risk may be severely underestimated if independence between markets is assumed. To account for the clustering of extreme events, we compute the bivariate extremal index and incorporate this information into the analysis.
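For orientation (a textbook sketch in our notation, not necessarily the paper's): with margins transformed to unit Fréchet scale, a bivariate extreme value distribution admits the Pickands representation

\[
G(z_1, z_2) = \exp\left\{ -\Big(\frac{1}{z_1} + \frac{1}{z_2}\Big) A(w) \right\}, \qquad w = \frac{1/z_2}{1/z_1 + 1/z_2},
\]

where the dependence function $A$ is convex with $\max(w, 1-w) \le A(w) \le 1$. Here $A \equiv 1$ corresponds to independence, summaries such as $2\{1 - A(1/2)\}$ quantify the strength of extremal dependence, and asymmetric parametric families for $A$ allow one margin to drive the dependence more than the other.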
{"title":"Improving financial risk assessment through dependency","authors":"Beatriz Mendes, A. Moretti","doi":"10.1191/1471082x02st028oa","DOIUrl":"https://doi.org/10.1191/1471082x02st028oa","url":null,"abstract":"Understanding dependency between financial markets is crucial when measuring globally integrated exposures to risk. To this end the first step may be the investigation of the joint behaviour of their most representative indexes. We fit by parametric and nonparametric methods bivariate extreme value models on the component wise maxima and minima computed monthly from several pairs of indexes representing the North American, Latin American, and Emerging markets. We analyse the role of the asymmetric models, finding which market drives the dependency, and express the degrees of dependence using measures of linear and nonlinear dependency such as the linear correlation coefficient ρ and the measure τ based on the dependence function. We discuss the interpretation of τ as a conditional probability that a crash occurs in a market given that a catastrophic event has occurred in some other market. We assess risks by computing probabilities associated with joint extreme events and by computing joint risk measures. We show empirically that the joint Value-at-Risk may be severely under-estimated if independence is assumed between markets. To take into account the clustering of extreme events we compute the bivariate extremal index and incorporate this information in the analysis.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124824301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Genetic analysis of cause of death in a mixture model of bivariate lifetime data
Pub Date: 2002-07-01 | DOI: 10.1191/1471082x02st030oa
A. Wienke, K. Christensen, A. Skytthe, A. Yashin
A mixture model in multivariate survival analysis is presented, in which heterogeneity among subjects creates divergent paths both for the individual's risk of experiencing an event (i.e., disease) and for the associated length of survival. Dependence among competing risks is included and rendered testable. The method is an extension of the bivariate correlated gamma-frailty model and is applied to a data set on Danish twins for whom cause-specific mortality is known. The use of multivariate data solves the identifiability problem inherent in the competing risks model for univariate lifetimes. We analyse the influence of genetic and environmental factors on frailty. Using a sample of 1470 monozygotic (MZ) and 2730 dizygotic (DZ) female twin pairs, we apply five genetic models to the associated mortality data, focusing particularly on death from coronary heart disease (CHD). Under the best-fitting model, the estimated heritability of the risk of death from CHD was 0.39 (standard error 0.13). The results are compared with those from an earlier analysis that used the restricted model in which independence of the competing risks was assumed; the comparison shows that the estimated heritability of frailty for CHD mortality changes substantially. Despite the inclusion of dependence, the analysis confirms a significant genetic component in an individual's risk of mortality from CHD. Whether dependence or independence is assumed, the best-fitting model for CHD mortality risk is one in which additive genetic factors are responsible for the heritability of susceptibility to CHD. The paper ends with a discussion of limitations and possible further extensions of the model.
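For background, the standard form of the bivariate correlated gamma-frailty model (notation ours, assuming equal frailty variance $\sigma^2$ in both twins and frailty correlation $\rho$) gives the joint survival function

\[
S(t_1, t_2) = S_1(t_1)^{1-\rho}\, S_2(t_2)^{1-\rho} \left[ S_1(t_1)^{-\sigma^2} + S_2(t_2)^{-\sigma^2} - 1 \right]^{-\rho/\sigma^2},
\]

where $S_1$ and $S_2$ are the marginal survival functions; estimating $\rho$ separately for MZ and DZ pairs is what permits the genetic decomposition of frailty into additive genetic and environmental components.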
{"title":"Genetic analysis of cause of death in a mixture model of bivariate lifetime data","authors":"A. Wienke, K. Christensen, A. Skytthe, A. Yashin","doi":"10.1191/1471082x02st030oa","DOIUrl":"https://doi.org/10.1191/1471082x02st030oa","url":null,"abstract":"A mixture model in multivariate survival analysis is presented, whereby heterogeneity among subjects creates divergent paths for the individual’s risk of experiencing an event (i.e., disease), as well as for the associated length of survival. Dependence among competing risks is included and rendered testable. This method is an extension of the bivariate correlated gamma-frailty model. It is applied to a data set on Danish twins, for whom cause-specific mortality is known. The use of multivariate data solves the identifiability problem which is inherent in the competing risk model of univariate lifetimes. We analyse the influence of genetic and environmental factors on frailty. Using a sample of 1470 monozygotic (MZ) and 2730 dizygotic (DZ) female twin pairs, we apply five genetic models to the associated mortality data, focusing particularly on death from coronary heart disease (CHD). Using the best fitting model, the inheritance risk of death from CHD was 0.39 (standard error 0.13). The results from this model are compared with the results from earlier analysis that used the restricted model, where the independence of competing risks was assumed. Comparing both cases, it turns out, that heritability of frailty on mortality due to CHD change substantially. Despite the inclusion of dependence, analysis confirms the significant genetic component to an individual’s risk of mortality from CHD. Whether dependence or independence is assumed, the best model for analysis with regard to CHD mortality risks is a model assuming that additive factors are responsible for heritability in susceptibility to CHD. The paper ends with a discussion of limitations and possible further extensions to the model presented.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"2975 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127456412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stationary space-time Gaussian fields and their time autoregressive representation
Pub Date: 2002-07-01 | DOI: 10.1191/1471082x02st029oa
G. Storvik, A. Frigessi, D. Hirst
We compare two different modelling strategies for continuous-space, discrete-time data. The first strategy is in the spirit of Gaussian kriging: the model is a general stationary space-time Gaussian field, and the key issue is the choice of a parametric form for the covariance function. Most covariance functions in use are separable in space and time; nonseparable covariance functions are useful in many applications, but constructing them is not easy. The second strategy models the time evolution of the process more directly, through models of autoregressive type in which the process at time t is obtained by convolving the process at time t − 1 and adding spatially correlated noise. Under specific conditions, the two strategies are different formulations of the same stochastic process. We show how the two representations look in various cases. Furthermore, by transforming time-dynamic convolution models into Gaussian fields we obtain new covariance functions, and by writing a Gaussian field as a time-dynamic convolution model we uncover interesting properties. The computational aspects of the two strategies are discussed through experiments on a data set of daily UK temperatures. Although estimation, simulation and related algorithms are straightforward under the first strategy, more computationally efficient algorithms can be constructed on the basis of the second.
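A compact sketch of the two formulations (illustrative notation): the first specifies the covariance directly,

\[
\mathrm{Cov}\{X_t(s), X_{t+u}(s+h)\} = C(h, u), \qquad \text{separable case: } C(h, u) = C_S(h)\, C_T(u),
\]

while the second specifies the dynamics,

\[
X_t(s) = \int k(s - s')\, X_{t-1}(s')\, \mathrm{d}s' + \varepsilon_t(s),
\]

with $\varepsilon_t$ a spatially correlated innovation field independent over time; under suitable stationarity conditions on the kernel $k$, the dynamic form induces a stationary (and generally nonseparable) space-time covariance.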
{"title":"Stationary space-time Gaussian fields and their time autoregressive representation","authors":"G. Storvik, A. Frigessi, D. Hirst","doi":"10.1191/1471082x02st029oa","DOIUrl":"https://doi.org/10.1191/1471082x02st029oa","url":null,"abstract":"We compare two different modelling strategies for continuous space discrete time data. The first strategy is in the spirit of Gaussian kriging. The model is a general stationary space-time Gaussian field where the key point is the choice of a parametric form for the covariance function. In the main, covariance functions that are used are separable in space and time. Nonseparable covariance functions are useful in many applications, but construction of these is not easy. The second strategy is to model the time evolution of the process more directly. We consider models of the autoregressive type where the process at time t is obtained by convolving the process at time t − 1 and adding spatially correlated noise. Under specific conditions, the two strategies describe two different formulations of the same stochastic process. We show how the two representations look in different cases. Furthermore, by transforming time-dynamic convolution models to Gaussian fields we can obtain new covariance functions and by writing a Gaussian field as a time-dynamic convolution model, interesting properties are discovered. The computational aspects of the two strategies are discussed through experiments on a dataset of daily UK temperatures. Although algorithms for performing estimation, simulation, and so on are easy to do for the first strategy, more computer-efficient algorithms based on the second strategy can be constructed.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125513153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semiparametric methods in applied econometrics: do the models fit the data?
Pub Date: 2002-04-01 | DOI: 10.1191/1471082x02st024oa
J. Horowitz, S. Lee
Much empirical research in economics and other fields is concerned with estimating the mean of a random variable conditional on one or more explanatory variables (the conditional mean function). The most frequently used estimation methods assume that the conditional mean function is known up to a finite number of parameters, but the resulting estimates can be highly misleading if the assumed parametric model is incorrect. This paper reviews several semiparametric methods for estimating conditional mean functions. These methods are more flexible than parametric methods and offer greater estimation precision than fully nonparametric methods. The various estimation methods are illustrated by applying them to data on the salaries of professional baseball players in the USA. We find that a parametric model and several simple semiparametric models fail to capture important features of the data, whereas a sufficiently rich semiparametric model fits the data well. We conclude that semiparametric models can achieve their aim of providing flexible representations of conditional mean functions, but care is needed in choosing the semiparametric specification. Our analysis also suggests directions for further research on semiparametric estimation.
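To fix ideas (our illustration of the model hierarchy, not the paper's full catalogue), the methods reviewed sit between two extremes:

\[
\text{parametric: } E(Y \mid X) = X^{\top}\beta; \qquad \text{single index: } E(Y \mid X) = g(X^{\top}\beta); \qquad \text{additive: } E(Y \mid X) = \mu + \sum_j f_j(X_j),
\]

with the link $g$ and the components $f_j$ estimated nonparametrically, as against the fully nonparametric $E(Y \mid X) = m(X)$, whose precision deteriorates rapidly as the dimension of $X$ grows.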
{"title":"Semiparametric methods in applied econometrics: do the models fit the data?","authors":"J. Horowitz, S. Lee","doi":"10.1191/1471082x02st024oa","DOIUrl":"https://doi.org/10.1191/1471082x02st024oa","url":null,"abstract":"Much empirical research in economics and other fields is concerned with estimating the mean of a random variable conditional on one or more explanatory variables (conditional mean function). The most frequently used estimation methods assume that the conditional mean function is known up to a finite number of parameters, but the resulting estimates can be highly misleading if the assumed parametric model is incorrect. This paper reviews several semiparametric methods for estimating conditional mean functions. These methods are more flexible than parametric methods and offer greater estimation precision than do fully nonparametric methods. The various estimation methods are illustrated by applying them to data on the salaries of professional baseball players in the USA. We find that a parametric model and several simple semiparametric models fail to capture important features of the data. However, a sufficiently rich semiparametric model fits the data well. We conclude that semiparametric models can achieve their aim of providing flexible representations of conditional mean functions, but care is needed in choosing the semiparametric specification. Our analysis also provides some suggestions for further research on semiparametric estimation.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123216892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fitting exponential family mixed models
Pub Date: 2002-04-01 | DOI: 10.1191/1471082x02st025oa
J. Palmgren, S. Ripatti
The generalized linear model (Nelder and Wedderburn, 1972) and the semiparametric multiplicative hazard model (Cox, 1972) have significantly influenced the way statistical modelling is taught and practised. Common to the two model families is the assumption that, conditionally on covariate information (including time), the observations are independent. The obvious difficulty of identifying and measuring all relevant covariates has motivated methods that can handle mean and dependence structures jointly. The early 1990s saw a myriad of approaches to multivariate generalized linear models; more recently, the hazard models have been extended to multivariate settings. Here we review (i) penalized likelihood, (ii) Monte Carlo EM and (iii) Bayesian Markov chain Monte Carlo methods for fitting generalized linear mixed models and frailty models, and we discuss the rationale for choosing between the methods. The similarities between the toolboxes for these two multivariate model families open up a new level of generality in both teaching and applied research. Two examples are used for illustration, involving censored failure time responses and Poisson responses, respectively.
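Schematically (our notation), the two families share a conditional-independence structure given random effects:

\[
\text{GLMM: } g\{E(y_{ij} \mid \mathbf{b}_i)\} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \mathbf{z}_{ij}^{\top}\mathbf{b}_i, \quad \mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}); \qquad \text{frailty: } \lambda_{ij}(t \mid u_i) = \lambda_0(t)\, u_i \exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}),
\]

and in both cases the marginal likelihood requires integrating the random effects out; it is this integral that penalized likelihood, Monte Carlo EM and Bayesian MCMC attack in different ways.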
{"title":"Fitting exponential family mixed models","authors":"J. Palmgren, S. Ripatti","doi":"10.1191/1471082x02st025oa","DOIUrl":"https://doi.org/10.1191/1471082x02st025oa","url":null,"abstract":"The generalized linear model (McCullagh and Nelder, 1972) and the semiparametric multiplicative hazard model (Cox, 1972) have significantly influenced the way in which statistical modelling is taught and practiced. Common for the two model families is the assumption that conditionally on covariate information (including time) the observations are independent. The obvious difficulty in identifying and measuring all relevant covariates has pushed for methods that can jointly handle both mean and dependence structures. The early 1990s saw a myriad of approaches for dealing with multivariate generalized linear models. More recently, the hazard models have been extended to multivariate settings. Here we review (i) penalized likelihood, (ii) Monte Carlo EM, and (iii) Bayesian Markov chain Monte Carlo methods for fitting the generalized linear mixed models and the frailty models, and we discuss the rationale for choosing between the methods. The similarities of the toolboxes for these two multivariate model families open up for a new level of generality both in teaching and applied research. Two examples are used for illustration, involving censored failure time responses and Poisson responses, respectively.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133650489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The common structure of several models for non-ignorable dropout
Pub Date: 2002-04-01 | DOI: 10.1191/1471082x02st022oa
R. Crouchley, M. Ganjali
This paper presents a multivariate generalization of the classical Heckman selection model and applies it to non-ignorable dropout in repeated continuous responses. Many recent models for dropout in repeated continuous responses can be written as special forms of this generalized Heckman model. To illustrate this, we present the parameterizations needed to obtain the form of dropout model that arises when (1) the separate models for the response and the dropout are linked by common random parameters; (2) the dropout model is an explicit function of the previous responses and the possibly unobserved current response; (3) the dropout model is a function of both the current response and a common random parameter; and (4) there is a covariance between the stochastic disturbances of the response and dropout processes. We present the joint likelihood of the generalized Heckman model and a residual for the responses, contrast two of the dropout models in a simulation study, and compare the results obtained from several dropout models on the well-known mastitis data.
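A skeletal version (illustrative notation; the coefficients $\delta_1, \delta_2$ are our hypothetical labels): for subject $i$ at occasion $t$, a response model is paired with a latent dropout (selection) model,

\[
y_{it} = \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + \varepsilon_{it}, \qquad r_{it}^{*} = \mathbf{w}_{it}^{\top}\boldsymbol{\gamma} + \delta_1 y_{i,t-1} + \delta_2 y_{it} + \eta_{it},
\]

with dropout at occasion $t$ when $r_{it}^{*} > 0$ and with correlated disturbances $(\varepsilon_{it}, \eta_{it})$; the special cases (1)-(4) then broadly correspond to replacing the $\delta$ terms by shared random parameters, retaining $\delta_1$ and $\delta_2$, combining a current-response term with a shared random parameter, and working only through the disturbance covariance.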
{"title":"The common structure of several models for non-ignorable dropout","authors":"R. Crouchley, M. Ganjali","doi":"10.1191/1471082x02st022oa","DOIUrl":"https://doi.org/10.1191/1471082x02st022oa","url":null,"abstract":"This paper presents a multivariate generalization of the classical Heckman selection model and applies it to non-ignorable dropout in repeated continuous responses. Many of the recent models for dropout in repeated continuous responses can be written as special forms of this generalized Heckman model. To illustrate this, we present the parameterizations needed to obtain the form of dropout model that occurs when (1) the separate models for the response and dropout are linked by common random parameters, (2) the dropout model is an explicit function of the previous responses and the possibly unobserved current response, (3) the dropout model is both a function of the current response and a common random parameter, and (4) there is a covariance between the stochastic disturbances of the response and dropout processes. We present the joint likelihood of the generalized Heckman model and a residual for the responses. We contrast two of the dropout models in a simulation study. We compare the results obtained from several dropout models on the well known mastitis data.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126622537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A life table approach to small area health need profiling
Pub Date: 2002-04-01 | DOI: 10.1191/1471082x02st026oa
Peter Congdon
Recent developments in health outcome models for small areas have shown the benefits of pooling information across areas to produce smoothed estimates of mortality and morbidity rates. Such indices serve as proxies for health care need and are often used in allocating health care resources. The present paper adopts a full life table approach to such outcomes, which includes the joint modelling of mortality and health variation between small areas. A further feature of the approach is random effects modelling of age-specific death and wellness rates, thereby pooling strength when estimating life table parameters for areas (such as healthy and total life expectancies) that may be based on small event counts. The basic model involves exchangeable random effects for age and area, but the structured forms of variation considered include correlation between mortality and health, spatial correlation in these outcomes, and interrelated age effects. A case study uses deaths and long-term illness data to develop small area life tables for two London boroughs, and includes a temporal perspective on deaths. It then considers the utility of area life table measures in predicting health activity, providing a form of validation in addition to formal statistical cross-validation.
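In schematic form (an assumed setup in our notation, not the paper's exact parameterization), deaths $d_{xa}$ in age group $x$ and area $a$ might be modelled as

\[
d_{xa} \sim \mathrm{Poisson}(m_{xa} P_{xa}), \qquad \log m_{xa} = \alpha_x + u_a + v_a,
\]

with $P_{xa}$ the population at risk, $u_a$ spatially structured and $v_a$ exchangeable area effects, and an analogous model for illness rates; life table summaries such as total and healthy life expectancy are then functions of the smoothed rates $m_{xa}$, the latter, for example, via Sullivan-type weighting of person-years by the proportion healthy.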
{"title":"A life table approach to small area health need profiling","authors":"Peter Congdon","doi":"10.1191/1471082x02st026oa","DOIUrl":"https://doi.org/10.1191/1471082x02st026oa","url":null,"abstract":"Recent developments in health outcome models for small areas have found benefits from pooling information over areas to produce smoothed estimates of mortality and morbidity rates. Such indices serve as proxies for the need for health care and are often used in allocating health care resources. The present paper adopts a full life table approach to such outcomes, which includes the joint modelling of mortality and health variation between small areas. A further feature of the approach here is random effects modelling of age-specific death and wellness rates, so pooling strength in estimating life table parameters for areas, such as healthy and total life expectancies, which may be based on small event counts. The basic model involves exchangeable random effects for age and area. However, structured forms of variation considered include correlations between mortality and health, spatial correlation in these outcomes, and interrelatedness in age effects. A case study illustration uses deaths and long-term illness data to develop small area life tables for two London boroughs, and includes a temporal perspective on deaths. It then considers the utility of area life table measures in predicting health activity, providing a form of validation in addition to formal statistical cross-validation.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123379992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Likelihood and Bayesian analysis of mixtures
Pub Date: 2001-12-01 | DOI: 10.1177/1471082X0100100404
M. Aitkin
This paper compares likelihood and Bayesian analyses of finite mixture distributions, and expresses reservations about the latter. In particular, the role of prior assumptions in the full Markov chain Monte Carlo Bayesian analysis is obscure, yet these assumptions clearly play a major role in the conclusions. These issues are illustrated with a detailed discussion of the well-known galaxy data.
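For reference (the standard object under discussion, in our notation), a $K$-component normal mixture has density

\[
f(y \mid \boldsymbol{\theta}) = \sum_{k=1}^{K} \pi_k\, \phi(y; \mu_k, \sigma_k^2), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1,
\]

and in the Bayesian treatment the priors placed on the $(\pi_k, \mu_k, \sigma_k^2)$, and on $K$ itself, are precisely the assumptions whose influence on the posterior conclusions the paper questions, with the galaxy recession-velocity data as the test case.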
{"title":"Likelihood and Bayesian analysis of mixtures","authors":"M. Aitkin","doi":"10.1177/1471082X0100100404","DOIUrl":"https://doi.org/10.1177/1471082X0100100404","url":null,"abstract":"This paper compares likelihood and Bayesian analyses of finite mixture distributions, and expresses reservations about the latter. In particular, the role of prior assumptions in the full Monte Carlo Markov chain Bayes analysis is obscure, yet these assumptions clearly play a major role in the conclusions. These issues are illustrated with a detailed discussion of the well-known galaxy data.","PeriodicalId":354759,"journal":{"name":"Statistical Modeling","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121825349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}