Exact inference on meta-analysis with generalized fixed-effects and random-effects models
Sifan Liu, L. Tian, Steve Lee, Min‐ge Xie
Pub Date: 2018-01-01 | DOI: 10.1080/24709360.2017.1400714
Biostatistics and Epidemiology 2(1), pp. 1–22

ABSTRACT Meta-analysis with fixed-effects and random-effects models provides a general framework for quantitatively summarizing multiple comparative studies. However, most conventional methods rely on large-sample approximations to justify their inference, which may be invalid and lead to erroneous conclusions, especially when the number of studies is not large or the sample sizes of the individual studies are small. In this article, we propose a set of ‘exact’ confidence intervals for the overall effect, whose coverage probabilities are always achieved. We start with conventional parametric fixed-effects and random-effects models, and then extend the exact methods beyond the commonly postulated Gaussian assumptions. Efficient numerical algorithms for implementing the proposed methods are developed. We also conduct simulation studies comparing the performance of our proposals to existing methods, which indicate that the proposed procedures are better in terms of coverage level and robustness. The new proposals are illustrated with data from meta-analyses estimating the efficacy of statins and of BCG vaccination.
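For context on what the paper's ‘exact’ intervals replace, the standard large-sample random-effects interval can be sketched as below. This is the classical DerSimonian–Laird method-of-moments estimator, not the authors' exact construction; the function name and example numbers are mine.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Large-sample random-effects meta-analysis: point estimate and
    95% Wald interval for the overall effect.
    y: per-study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                            # fixed-effects weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)       # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)     # method-of-moments between-study variance
    w_re = 1.0 / (v + tau2)                # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

mu, ci = dersimonian_laird([0.2, 0.5, -0.1, 0.4], [0.04, 0.09, 0.05, 0.02])
```

The Wald interval's nominal 95% coverage holds only asymptotically in the number of studies, which is exactly the failure mode the article targets when k is small.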
Identifying associated risk factors for severity of diabetic retinopathy from ordinal logistic regression models
V. Kulothungan, M. Subbiah, R. Ramakrishnan, R. Raman
Pub Date: 2018-01-01 | DOI: 10.1080/24709360.2017.1406040
Biostatistics and Epidemiology 2(1), pp. 34–46

ABSTRACT The practice of medical statistics and epidemiology encourages the repeated application of a few variants of the generalized linear model. This work identifies such a situation in modelling the risk factors for diabetic retinopathy, a major cause of blindness in adults that is associated with Type II diabetes. The main objective of this study is to retain the ordinal nature of the response variable, one of the main concerns in ordinal regression procedures, and to emphasize the need for applying stereotype regression to biomedical data. The analysis plan envisaged in this study shows the relevance of, and scope for, extending the use of ordinal regression models.
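The stereotype model the abstract advocates assigns each response category a score, so a single linear predictor shifts probability mass along the ordered severity levels. A minimal sketch of its category probabilities follows; the threshold and score values are illustrative, not fitted to any retinopathy data.

```python
import numpy as np

def stereotype_probs(eta, theta, phi):
    """Category probabilities under Anderson's stereotype model:
    P(Y = j | x) is proportional to exp(theta_j + phi_j * eta),
    where eta = x'beta and the scores phi are ordered
    (phi_1 <= ... <= phi_J) to respect the ordinal response."""
    theta, phi = np.asarray(theta, float), np.asarray(phi, float)
    logits = theta + phi * eta
    p = np.exp(logits - logits.max())   # numerically stabilized softmax
    return p / p.sum()

# Four severity levels; scores fixed at ordered values in [0, 1].
theta = [0.0, 0.3, -0.2, -1.0]
phi = [0.0, 0.4, 0.7, 1.0]
p = stereotype_probs(1.5, theta, phi)
```

Because the phi are ordered, a larger eta monotonically shifts the expected category upward, which is how the model retains the ordinal structure while staying more flexible than proportional odds.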
Tree-based ensemble methods for individualized treatment rules
Kehao Zhu, Ying Huang, Xiao‐Hua Zhou
Pub Date: 2018-01-01 | DOI: 10.1080/24709360.2018.1435608
Biostatistics and Epidemiology 2(1), pp. 61–83

ABSTRACT There is growing interest in the development of statistical methods for personalized or precision medicine, especially for deriving optimal individualized treatment rules (ITRs). An ITR recommends a treatment to a patient based on the patient's characteristics. Common parametric methods for deriving an optimal ITR, which model the clinical endpoint as a function of the patient's characteristics, can have suboptimal performance when the conditional mean model is misspecified. Recent methodological developments have cast the problem of deriving an optimal ITR as a weighted classification problem. Under this weighted classification framework, we develop a weighted random forests (W-RF) algorithm that derives an optimal ITR nonparametrically. In addition, with the W-RF algorithm, we propose variable importance measures that quantify the relative relevance of the patient's characteristics to treatment selection, and an out-of-bag estimator of the population average outcome under the estimated optimal ITR. Our proposed methods are evaluated through extensive simulation studies. We illustrate their application using data from the Clinical Antipsychotic Trials of Intervention Effectiveness Alzheimer's Disease Study.
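The reduction of ITR estimation to weighted classification can be sketched as below in the style of outcome-weighted learning: the received treatment becomes the class label, and the outcome divided by the propensity of the received arm becomes the case weight. This is a generic construction for intuition, not the paper's W-RF algorithm itself; the function name is mine.

```python
import numpy as np

def owl_labels_weights(Y, A, prop):
    """Outcome-weighted reduction of ITR estimation to weighted
    classification. A in {0, 1} is the received treatment, Y a
    larger-is-better outcome (shift Y to be nonnegative if needed),
    prop = P(A = 1 | X). Returns labels (the treatment actually
    received) and case weights Y / P(A = received arm | X); any
    weighted classifier, e.g. a random forest trained with these
    sample weights, then estimates the optimal rule."""
    Y, A, prop = (np.asarray(z, float) for z in (Y, A, prop))
    p_a = np.where(A == 1, prop, 1.0 - prop)  # prob. of the received arm
    return A.astype(int), Y / p_a

labels, w = owl_labels_weights(Y=[2.0, 4.0], A=[1, 0], prop=[0.5, 0.25])
```

Patients who did well under their assigned arm get large weights, so the classifier is pushed to reproduce their assignments, which is what makes the classification rule an estimate of the optimal ITR.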
Nonparametric estimation of medical cost quantiles in the presence of competing terminal events
Mei-Cheng Wang, Yifei Sun
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2017.1342185
Biostatistics and Epidemiology 1(1), pp. 78–91

ABSTRACT Medical care costs are commonly used by health policy-makers and decision-makers for evaluating health care services and deciding on treatment plans. Such data are routinely recorded in surveillance systems whenever inpatient or outpatient care is provided. In this paper, we formulate medical cost data as a recurrent marker process, composed of recurrent events (inpatient or outpatient care) and repeatedly measured markers (medical charges). We consider nonparametric estimation of the quantiles of the cost distribution among survivors in the absence or presence of competing terminal events. Statistical methods are developed for quantile estimation of the cost distribution, with the aim of evaluating cost performance in relation to recurrent events, marker measurements, and time to the terminal event for different competing-risk groups. The proposed approaches are illustrated by an analysis of data from the linked Surveillance, Epidemiology, and End Results (SEER)–Medicare database.
Statistical considerations for assessing cognition and neuropathology associations in preclinical Alzheimer's disease
M. Malek-Ahmadi, E. Mufson, S. Perez, Kewei Chen
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2017.1342186
Biostatistics and Epidemiology 1(1), pp. 92–104

ABSTRACT Analysis of the associations between the neuropathology of Alzheimer's disease (AD) and cognition has become a major area of investigation, as both basic and clinical researchers have turned their attention toward identifying the factors underlying the onset of preclinical AD. Here we provide a conceptual overview of statistical approaches for analyzing associations between cognition and AD neuropathology in the context of prodromal AD. The review discusses a variety of statistical approaches, their application to various clinicopathological variables and research questions, and the importance of accounting for and including interaction terms in statistical models. The overview introduces data analysts and statisticians to the nomenclature of AD neuropathology and provides relevant background on the nature of the cognitive and neuropathological data generated in the investigation of preclinical AD. In addition, we introduce a number of statistical approaches that researchers who specialize in AD neuropathology may utilize in their studies. For both audiences, this review provides an applied statistical framework to draw from in future research.
Joint modeling of longitudinal cholesterol measurements and time to onset of dementia in an elderly African American cohort
Shanshan Li, Mengjie Zheng, Sujuan Gao
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2017.1381300
Biostatistics and Epidemiology 1(1), pp. 148–160

ABSTRACT This paper presents a statistical method for analyzing the association between longitudinal cholesterol measurements and the timing of onset of dementia. The proposed approach jointly models the longitudinal and survival processes for each individual on the basis of a shared random effect: a linear mixed-effects model is assumed for the longitudinal component, and an extended Cox regression model is employed for the survival component. A dynamic prediction model built on the joint model provides predictions of the conditional survival probabilities at different time points using the available longitudinal measurements as well as baseline characteristics. We apply our method to the Indianapolis-Ibadan Dementia Project, a 20-year study of dementia in elderly African Americans living in Indianapolis, Indiana. We find that, after adjusting for baseline covariates and comorbidities, the risk of dementia decreases by 1% per 1 mg/dl increase in total cholesterol. We therefore conclude that, in a healthy cohort of African Americans aged 65 years or more, a high late-life cholesterol level is associated with a lower incidence of dementia.
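On the scale of the reported association (the 1% figure itself comes from the abstract): hazard ratios compound multiplicatively over larger covariate differences, so a per-unit reduction can be translated to a more clinically interpretable increment.

```python
# A 1% hazard reduction per 1 mg/dl of total cholesterol corresponds,
# multiplicatively, to roughly a 10% reduction per 10 mg/dl.
hr_per_unit = 0.99
hr_per_10 = hr_per_unit ** 10   # approximately 0.904
```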
Using time-varying quantile regression approaches to model the influence of prenatal and infant exposures on childhood growth
Ying Wei, Xinran Ma, Xinhua Liu, M. Terry
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2017.1358137
Biostatistics and Epidemiology 1(1), pp. 133–147

ABSTRACT For many applications, it is valuable to assess whether the effects of exposures over time vary by quantiles of the outcome. We have previously shown that quantile methods complement traditional mean-based analyses and are useful in studies of body size. Here, we extend that work to time-varying quantile associations. Using data on over 18,000 children in the U.S. Collaborative Perinatal Project, we investigated the impact of maternal pre-pregnancy body mass index (BMI), maternal pregnancy weight gain, placental weight, and birth weight on childhood body size measured four times between 3 months and 7 years of age, using both parametric and non-parametric time-varying quantile regressions. Using our proposed model assessment tool, we found that the non-parametric models fit the childhood growth data better than the parametric approaches. We also observed that the quantile analysis led to different inferences from the conditional mean models for three of the four constructs (maternal pre-pregnancy BMI, maternal weight gain, and placental weight). Overall, these results suggest the utility of time-varying quantile models for longitudinal outcome data. They also suggest that, in studies of body size, modelling only the conditional mean may give an incomplete summary of the data.
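The basic object underlying any quantile regression, time-varying or not, is the check (pinball) loss, which the fitted quantile minimizes in place of squared error. A minimal sketch for a constant predictor (my own illustration, unrelated to the paper's data):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average check (pinball) loss of a constant predictor q at
    quantile level tau; quantile regression fits minimize this loss,
    just as least squares minimizes squared error for the mean."""
    u = np.asarray(y, float) - q
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
q75 = np.quantile(y, 0.75)   # the empirical 75th percentile
```

Evaluating `pinball_loss(y, q, 0.75)` over a grid of q values shows a convex piecewise-linear loss minimized at the empirical 75th percentile, which is why quantile models can detect exposure effects that shift the tails of body size without moving its mean.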
Editorial
Xiaohong Zhou
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2016.1198464
Biostatistics and Epidemiology 1(1), pp. 1–2

Dear Readers, We are delighted to announce the launch of Biostatistics & Epidemiology as the official journal of the International Biometric Society Chinese Region. The International Biometric Society Chinese Region was founded in 2012, with the support of the International Biometric Society, and has grown strongly since then, becoming a focus of biostatistical and epidemiological research in China and beyond. The growth of this community has reached the point where the launch of a dedicated, top-quality peer-reviewed research journal is necessary and warranted. Below, we outline the mission and scope of the Journal, along with the review process. The Journal aims to provide a platform for the dissemination of new statistical methods and the promotion of good analytical practices in biomedical investigation and epidemiology. The Journal has four main sections:
A tutorial on kernel density estimation and recent advances
Yen-Chi Chen
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2017.1396742
Biostatistics and Epidemiology 1(1), pp. 161–187

ABSTRACT This tutorial provides a gentle introduction to kernel density estimation (KDE) and recent advances regarding confidence bands and geometric/topological features. We begin with a discussion of basic properties of KDE: the convergence rate under various metrics, density derivative estimation, and bandwidth selection. Then, we introduce common approaches to the construction of confidence intervals/bands, and we discuss how to handle bias. Next, we talk about recent advances in the inference of geometric and topological features of a density function using KDE. Finally, we illustrate how one can use KDE to estimate a cumulative distribution function and a receiver operating characteristic curve. We provide R implementations related to this tutorial at the end.
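The tutorial's accompanying implementations are in R; as a quick self-contained sketch of the estimator itself, a Gaussian KDE with a rule-of-thumb bandwidth can be written as follows (my own illustration, not the tutorial's code).

```python
import numpy as np

def kde(x, data, h=None):
    """Gaussian kernel density estimate evaluated at points x.
    When h is not supplied, Silverman's rule-of-thumb bandwidth
    1.06 * s * n^(-1/5) is used."""
    data = np.asarray(data, float)
    n = data.size
    if h is None:
        h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)  # Silverman's rule
    z = (np.atleast_1d(x)[:, None] - data[None, :]) / h
    # Average of Gaussian kernels centered at each observation.
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = rng.normal(size=2000)
dens = kde(np.array([0.0]), sample)   # should be near the N(0,1) density at 0
```

Integrating the same estimator numerically recovers a smooth CDF estimate, which is the route the tutorial takes to KDE-based ROC curves.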
Analysis of progressive multi-state models with misclassified states: likelihood and pairwise likelihood methods
G. Yi, Wenqing He, Feng He
Pub Date: 2017-01-01 | DOI: 10.1080/24709360.2017.1359356
Biostatistics and Epidemiology 1(1), pp. 119–132

ABSTRACT Multi-state models are commonly used in studies of disease progression. Methods developed under this framework, however, are often challenged by misclassification of states. In this article, we investigate issues concerning continuous-time progressive multi-state models with state misclassification. We develop inference methods, using both likelihood and pairwise likelihood, that are based on joint modelling of the progression and misclassification processes. We assess the performance of the proposed methods through simulation studies and illustrate their use with an application to data arising from a coronary allograft vasculopathy study.
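The structure being jointly modelled can be illustrated with a discrete-time toy version (not the paper's continuous-time likelihood machinery; all matrices here are invented for illustration): a progressive chain governs the true states, and a misclassification matrix distorts what is recorded.

```python
import numpy as np

# Progressive three-state chain: transitions only move forward,
# with state 3 absorbing.
P = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

# Misclassification matrix: E[i, j] = P(record state j | true state i).
E = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

pi0 = np.array([1.0, 0.0, 0.0])                # everyone starts in state 1
pi_true = pi0 @ np.linalg.matrix_power(P, 4)   # true state distribution at visit 4
pi_obs = pi_true @ E                           # distribution the data actually record
```

Even this toy version shows why naive fitting is biased: the recorded distribution `pi_obs` differs from `pi_true`, and recorded backward "transitions" can appear in a strictly progressive process, which is what the joint likelihood and pairwise likelihood methods are built to account for.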