Pub Date: 2024-01-01 | Epub Date: 2023-08-12 | DOI: 10.1007/s10985-023-09607-6
Jeffrey Zhang, Dylan S Small
We conduct an observational study of the effect of the sickle cell trait Haemoglobin AS (HbAS) on the hazard rate of malarial fevers in children. Assuming no unmeasured confounding, there is strong evidence that HbAS reduces the rate of malarial fevers. Since this is an observational study, however, the no-unmeasured-confounding assumption is strong. A sensitivity analysis considers how robust a conclusion is to a potential unmeasured confounder. We propose a new sensitivity analysis method for recurrent event data and apply it to the malaria study. We find that for the causal conclusion that HbAS is protective against malarial fevers to be overturned, the hypothesized unmeasured confounder would have to be as influential as all but one of the measured confounders.
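The paper develops its own sensitivity model for recurrent events; as a generic illustration of the underlying question (how strong must an unmeasured confounder be to explain away an observed rate ratio?), here is a minimal sketch of the classical multiplicative bias formula for a binary unmeasured confounder. The function name and all numeric inputs below are illustrative, not taken from the study.

```python
def confounded_rate_ratio(rr_obs, rr_uy, p_u1, p_u0):
    """Adjust an observed rate ratio for a hypothesized binary unmeasured
    confounder U using the standard multiplicative bias formula:
      rr_obs : observed exposure-outcome rate ratio,
      rr_uy  : rate ratio of the outcome for U=1 vs U=0,
      p_u1   : prevalence of U among the exposed,
      p_u0   : prevalence of U among the unexposed.
    The bias factor is [1 + (rr_uy - 1) p_u1] / [1 + (rr_uy - 1) p_u0]."""
    bias = (p_u1 * rr_uy + (1 - p_u1)) / (p_u0 * rr_uy + (1 - p_u0))
    return rr_obs / bias

# Made-up example: an observed rate ratio of 0.6 with a confounder that
# doubles the fever rate and is imbalanced 0.2 vs 0.5 across groups is
# only partially explained away (adjusted ratio 0.75).
```

Under these made-up inputs a fairly strong confounder still leaves the protective conclusion intact, which mirrors the kind of statement the paper's sensitivity analysis makes relative to the measured confounders.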
{"title":"Sensitivity Analysis for Observational Studies with Recurrent Events.","authors":"Jeffrey Zhang, Dylan S Small","doi":"10.1007/s10985-023-09607-6","DOIUrl":"10.1007/s10985-023-09607-6","url":null,"abstract":"<p><p>We conduct an observational study of the effect of sickle cell trait Haemoglobin AS (HbAS) on the hazard rate of malaria fevers in children. Assuming no unmeasured confounding, there is strong evidence that HbAS reduces the rate of malarial fevers. Since this is an observational study, however, the no unmeasured confounding assumption is strong. A sensitivity analysis considers how robust a conclusion is to a potential unmeasured confounder. We propose a new sensitivity analysis method for recurrent event data and apply it to the malaria study. We find that for the causal conclusion that HbAS is protective against malarial fevers to be overturned, the hypothesized unmeasured confounder must be as influential as all but one of the measured confounders.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"237-261"},"PeriodicalIF":1.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10353741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | Epub Date: 2023-06-04 | DOI: 10.1007/s10985-023-09601-y
Marie Skov Breum, Anders Munch, Thomas A Gerds, Torben Martinussen
In this article we study the effect of a baseline exposure on a terminal time-to-event outcome, either directly or mediated by the illness state of a continuous-time illness-death process with baseline covariates. We propose a definition of the corresponding direct and indirect effects using the concept of separable (interventionist) effects (Robins and Richardson in Causality and psychopathology: finding the determinants of disorders and their cures, Oxford University Press, 2011; Robins et al. in arXiv:2008.06019, 2021; Stensrud et al. in J Am Stat Assoc 117:175-183, 2022). Our proposal generalizes Martinussen and Stensrud (Biometrics 79:127-139, 2023), who consider similar causal estimands for disentangling the causal treatment effects on the event of interest and competing events in the standard continuous-time competing risk model. Unlike natural direct and indirect effects (Robins and Greenland in Epidemiology 3:143-155, 1992; Pearl in Proceedings of the seventeenth conference on uncertainty in artificial intelligence, Morgan Kaufmann, 2001), which are usually defined through manipulations of the mediator independently of the exposure (so-called cross-world interventions), separable direct and indirect effects are defined through interventions on different components of the exposure that exert their effects through distinct causal mechanisms. This approach allows us to define meaningful mediation targets even though the mediating event is truncated by the terminal event. We present the conditions for identifiability, which include some arguably restrictive structural assumptions on the treatment mechanism, and discuss when such assumptions are valid. The identifying functionals are used to construct plug-in estimators for the separable direct and indirect effects. We also present multiply robust and asymptotically efficient estimators based on the efficient influence functions. We verify the theoretical properties of the estimators in a simulation study, and we demonstrate the use of the estimators using data from a Danish registry study.
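Separable effects reason about hypothetical exposures whose components act on different transition intensities of the illness-death process. A minimal simulation sketch of the underlying three-state model follows; the constant intensities, the function name, and the idea of holding one intensity at its control value are all illustrative, not the paper's estimator.

```python
import random

def simulate_illness_death(a01, a02, a12, horizon, rng):
    """Simulate one path of a Markov illness-death model with constant
    transition intensities: state 0 = healthy, 1 = ill, 2 = dead.
    a01, a02, a12 are the 0->1, 0->2 and 1->2 intensities."""
    t, state = 0.0, 0
    illness_time = None
    while t < horizon and state != 2:
        if state == 0:
            total = a01 + a02
            t += rng.expovariate(total)
            if t >= horizon:
                break
            state = 1 if rng.random() < a01 / total else 2
            if state == 1:
                illness_time = t
        else:  # state == 1: only death remains
            t += rng.expovariate(a12)
            if t < horizon:
                state = 2
    return state, illness_time, min(t, horizon)
```

A separable direct-effect contrast would, for instance, compare survival when the exposure component driving the direct 0-to-2 and 1-to-2 intensities is set to the treated value while the component driving illness onset (0-to-1) is held at the control value.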
{"title":"Estimation of separable direct and indirect effects in a continuous-time illness-death model.","authors":"Marie Skov Breum, Anders Munch, Thomas A Gerds, Torben Martinussen","doi":"10.1007/s10985-023-09601-y","DOIUrl":"10.1007/s10985-023-09601-y","url":null,"abstract":"<p><p>In this article we study the effect of a baseline exposure on a terminal time-to-event outcome either directly or mediated by the illness state of a continuous-time illness-death process with baseline covariates. We propose a definition of the corresponding direct and indirect effects using the concept of separable (interventionist) effects (Robins and Richardson in Causality and psychopathology: finding the determinants of disorders and their cures, Oxford University Press, 2011; Robins et al. in arXiv:2008.06019 , 2021; Stensrud et al. in J Am Stat Assoc 117:175-183, 2022). Our proposal generalizes Martinussen and Stensrud (Biometrics 79:127-139, 2023) who consider similar causal estimands for disentangling the causal treatment effects on the event of interest and competing events in the standard continuous-time competing risk model. Unlike natural direct and indirect effects (Robins and Greenland in Epidemiology 3:143-155, 1992; Pearl in Proceedings of the seventeenth conference on uncertainty in artificial intelligence, Morgan Kaufmann, 2001) which are usually defined through manipulations of the mediator independently of the exposure (so-called cross-world interventions), separable direct and indirect effects are defined through interventions on different components of the exposure that exert their effects through distinct causal mechanisms. This approach allows us to define meaningful mediation targets even though the mediating event is truncated by the terminal event. We present the conditions for identifiability, which include some arguably restrictive structural assumptions on the treatment mechanism, and discuss when such assumptions are valid. 
The identifying functionals are used to construct plug-in estimators for the separable direct and indirect effects. We also present multiply robust and asymptotically efficient estimators based on the efficient influence functions. We verify the theoretical properties of the estimators in a simulation study, and we demonstrate the use of the estimators using data from a Danish registry study.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"143-180"},"PeriodicalIF":1.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10764601/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9924229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | Epub Date: 2023-11-17 | DOI: 10.1007/s10985-023-09610-x
Olivier Bouaziz
In a recurrent event setting, we introduce a new score designed to evaluate, for a given model, the ability to predict the expected cumulative number of recurrent events. This score can be seen as an extension of the Brier score for single time-to-event data, but it works for recurrent events with or without a terminal event. Theoretical results show that, under standard assumptions in a recurrent event context, our score can be asymptotically decomposed as the sum of the theoretical mean squared error between the model and the true expected cumulative number of recurrent events and an inseparability term that does not depend on the model. This decomposition is further illustrated in simulation studies. It is also shown that this score should be used in comparison with a reference model, such as a nonparametric estimator that does not include the covariates. Finally, the score is applied to the prediction of hospitalisations in a dataset of patients suffering from atrial fibrillation, and the prediction performances of different models, such as the Cox model, the Aalen model, and the Ghosh and Lin model, are compared.
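As a toy version of such a score: compare each subject's observed cumulative event count with a model's predicted expected cumulative number over a time grid. This sketch assumes complete follow-up; the published score additionally handles censoring and terminal events, and the function names and grid are illustrative.

```python
def cumulative_count(event_times, t):
    """Observed number of recurrent events up to and including time t."""
    return sum(1 for s in event_times if s <= t)

def prediction_score(subjects, predict, grid):
    """Mean squared discrepancy, over a time grid, between each subject's
    observed cumulative event count and the model's predicted expected
    cumulative number of events (no censoring, for illustration).
      subjects : list of (event_times, covariates) pairs,
      predict  : function (covariates, t) -> expected cumulative count."""
    total, n = 0.0, 0
    for events, x in subjects:
        for t in grid:
            total += (cumulative_count(events, t) - predict(x, t)) ** 2
            n += 1
    return total / n
```

Lower is better, and, as the abstract notes, the value is only interpretable relative to a reference model evaluated with the same score on the same data.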
{"title":"Assessing model prediction performance for the expected cumulative number of recurrent events.","authors":"Olivier Bouaziz","doi":"10.1007/s10985-023-09610-x","DOIUrl":"10.1007/s10985-023-09610-x","url":null,"abstract":"<p><p>In a recurrent event setting, we introduce a new score designed to evaluate the prediction ability, for a given model, of the expected cumulative number of recurrent events. This score can be seen as an extension of the Brier Score for single time to event data but works for recurrent events with or without a terminal event. Theoretical results are provided that show that under standard assumptions in a recurrent event context, our score can be asymptotically decomposed as the sum of the theoretical mean squared error between the model and the true expected cumulative number of recurrent events and an inseparability term that does not depend on the model. This decomposition is further illustrated on simulations studies. It is also shown that this score should be used in comparison with a reference model, such as a nonparametric estimator that does not include the covariates. Finally, the score is applied for the prediction of hospitalisations on a dataset of patients suffering from atrial fibrillation and a comparison of the prediction performances of different models, such as the Cox model, the Aalen Model or the Ghosh and Lin model, is investigated.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"262-289"},"PeriodicalIF":1.3,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136400054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-05-09 | DOI: 10.1007/s10985-023-09600-z
Yang Qu, Yu Cheng
We propose a screening method for high-dimensional data with ordinal competing risk outcomes that is time-dependent and model-free. Existing methods are designed for cause-specific variable screening and fail to evaluate how a biomarker is associated with multiple competing events simultaneously. The proposed method utilizes the volume under the ROC surface (VUS), which measures the concordance between the values of a biomarker and event status at given time points and provides an overall evaluation of the discrimination capacity of the biomarker. We show that the VUS possesses the sure screening property, i.e., truly important covariates are retained with probability tending to one, and the size of the selected set can be bounded with high probability. In simulation studies, the VUS proves a viable model-free screening metric compared with existing methods, and it is especially robust to data contamination. Through an analysis of breast-cancer gene-expression data, we illustrate the unique insights into overall discriminatory capability provided by the VUS.
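For three ordered outcome classes, the VUS is the probability that one marker value drawn from each class is correctly ordered. A minimal empirical version is below; the strict-inequality handling of ties is a simplifying assumption, and in the paper's setting the class labels would be the (time-dependent) ordinal competing-risk statuses.

```python
from itertools import product

def vus(marker_by_class):
    """Empirical volume under the ROC surface for three ordered classes
    0 < 1 < 2: the proportion of triples (x0, x1, x2), one marker value
    drawn from each class, satisfying x0 < x1 < x2 (ties discordant)."""
    g0, g1, g2 = marker_by_class
    concordant = sum(1 for a, b, c in product(g0, g1, g2) if a < b < c)
    return concordant / (len(g0) * len(g1) * len(g2))
```

A marker with no discriminatory ability has VUS near 1/6 (random ordering of three values), so screening keeps biomarkers whose VUS exceeds a data-driven threshold above that baseline.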
{"title":"Volume under the ROC surface for high-dimensional independent screening with ordinal competing risk outcomes.","authors":"Yang Qu, Yu Cheng","doi":"10.1007/s10985-023-09600-z","DOIUrl":"10.1007/s10985-023-09600-z","url":null,"abstract":"<p><p>We propose a screening method for high-dimensional data with ordinal competing risk outcomes, which is time-dependent and model-free. Existing methods are designed for cause-specific variable screening and fail to evaluate how a biomarker is associated with multiple competing events simultaneously. The proposed method utilizes the Volume under the ROC surface (VUS), which measures the concordance between values of a biomarker and event status at certain time points and provides an overall evaluation of the discrimination capacity of a biomarker. We show that the VUS possesses the sure screening property, i.e., true important covariates can be retained with probability tending to one, and the size of the selected set can be bounded with high probability. The VUS appears to be a viable model-free screening metric as compared to some existing methods in simulation studies, and it is especially robust to data contamination. Through an analysis of breast-cancer gene-expression data, we illustrate the unique insights into the overall discriminatory capability provided by the VUS.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"735-751"},"PeriodicalIF":1.3,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9432326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-05-07 | DOI: 10.1007/s10985-023-09598-4
Hongkai Liang, Xiaoguang Wang, Yingwei Peng, Yi Niu
Clustered and multivariate failure time data are commonly encountered in biomedical studies, and a marginal regression approach is often employed to identify potential risk factors for failure. We consider a semiparametric marginal Cox proportional hazards model for right-censored survival data with potential correlation. We propose a quadratic inference function method, based on the generalized method of moments, to obtain optimal hazard ratio estimators. The inverse of the working correlation matrix is represented by a linear combination of basis matrices in the context of the estimating equation. We investigate the asymptotic properties of the regression estimators from the proposed method and discuss the optimality of the hazard ratio estimators. Our simulation study shows that the estimator from the quadratic inference approach is more efficient than those from existing estimating equation methods, whether or not the working correlation structure is correctly specified. Finally, we apply the model and the proposed estimation method to a study of tooth loss and uncover insights that were previously inaccessible with existing methods.
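The paper applies quadratic inference functions to a marginal Cox model; as a sketch of the QIF machinery itself, here is the objective for a linear marginal model, with the working-correlation inverse expanded in two basis matrices (identity plus exchangeable off-diagonal). The data, basis choice, and function name are illustrative, not the paper's Cox-model construction.

```python
import numpy as np

def qif(beta, clusters, basis):
    """Quadratic inference function Q(beta) = N * gbar' C^{-1} gbar for a
    linear marginal model y = X beta + error. Each cluster contributes the
    stacked estimating functions X' M_k (y - X beta) for basis matrices
    M_k approximating the inverse working correlation; C is the empirical
    covariance of these stacked functions."""
    gs = []
    for X, y in clusters:
        r = y - X @ beta
        gs.append(np.concatenate([X.T @ (M @ r) for M in basis]))
    G = np.array(gs)
    gbar = G.mean(axis=0)
    C = G.T @ G / len(gs)
    return len(gs) * gbar @ np.linalg.solve(C, gbar)
```

Minimizing Q over beta plays the role of solving the optimally weighted estimating equations, and no nuisance correlation parameter needs to be estimated, which is the source of the efficiency robustness the abstract reports.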
{"title":"Improving marginal hazard ratio estimation using quadratic inference functions.","authors":"Hongkai Liang, Xiaoguang Wang, Yingwei Peng, Yi Niu","doi":"10.1007/s10985-023-09598-4","DOIUrl":"10.1007/s10985-023-09598-4","url":null,"abstract":"<p><p>Clustered and multivariate failure time data are commonly encountered in biomedical studies and a marginal regression approach is often employed to identify the potential risk factors of a failure. We consider a semiparametric marginal Cox proportional hazards model for right-censored survival data with potential correlation. We propose to use a quadratic inference function method based on the generalized method of moments to obtain the optimal hazard ratio estimators. The inverse of the working correlation matrix is represented by the linear combination of basis matrices in the context of the estimating equation. We investigate the asymptotic properties of the regression estimators from the proposed method. The optimality of the hazard ratio estimators is discussed. Our simulation study shows that the estimator from the quadratic inference approach is more efficient than those from existing estimating equation methods whether the working correlation structure is correctly specified or not. 
Finally, we apply the model and the proposed estimation method to analyze a study of tooth loss and have uncovered new insights that were previously inaccessible using existing methods.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"823-853"},"PeriodicalIF":1.3,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9470989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-09-15 | DOI: 10.1007/s10985-023-09609-4
David Oakes
I present some personal memories and thoughts on Cox's 1972 paper "Regression Models and Life-Tables".
{"title":"Cox (1972): recollections and reflections.","authors":"David Oakes","doi":"10.1007/s10985-023-09609-4","DOIUrl":"10.1007/s10985-023-09609-4","url":null,"abstract":"<p><p>I present some personal memories and thoughts on Cox's 1972 paper \"Regression Models and Life-Tables\".</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"699-708"},"PeriodicalIF":1.3,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10235914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-01-20 | DOI: 10.1007/s10985-022-09588-y
Bernard Rosner, Camden Bay, Robert J Glynn, Gui-Shuang Ying, Maureen G Maguire, Mei-Ling Ting Lee
The Kaplan-Meier estimator is ubiquitously used to estimate survival probabilities for time-to-event data. It is nonparametric, and thus does not require specification of a survival distribution, but it does assume that the risk set at any time t consists of independent observations. This assumption does not hold for data from paired organ systems, such as those in ophthalmology (eyes) or otolaryngology (ears), or for other types of clustered data. In this article, we estimate marginal survival probabilities in the setting of clustered data and provide confidence limits for these estimates, with intra-cluster correlation accounted for by an interval-censored version of the Clayton-Oakes model. We develop a goodness-of-fit test for general bivariate interval-censored data and apply it to the proposed interval-censored version of the Clayton-Oakes model. We also propose a likelihood ratio test for comparing survival distributions between two groups of clustered data under the assumption of a constant between-group hazard ratio. This methodology accommodates both balanced and unbalanced cluster sizes, as well as informative cluster sizes. We compare our test to the ordinary log-rank test and to the Lin-Wei (LW) test based on the marginal Cox proportional hazards model with robust standard errors obtained from the sandwich estimator. Simulation results indicate that the ordinary log-rank test inflates the type I error, while the proposed unconditional likelihood ratio test has appropriate type I error and higher power than the LW test. The method is demonstrated with real examples from the Sorbinil Retinopathy Trial and the Age-Related Macular Degeneration Study; raw data from both trials are provided.
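To make the clustering issue concrete: the Kaplan-Meier point estimate itself can still be computed from pooled paired-organ data, but its naive (Greenwood) variance treats the two eyes of one patient as independent, which is what the paper's Clayton-Oakes-based intervals correct. A minimal product-limit sketch (right-censored, exact times; the paper's setting is interval-censored):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t) at each distinct event time.
      times  : observed times (event or censoring),
      events : 1 if the time is an event, 0 if censored.
    Usable as a point estimate even for clustered (e.g. paired-eye) data,
    but naive Greenwood variance would wrongly assume independence."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for s, e in data if s == t and e == 1)  # events at t
        c = sum(1 for s, e in data if s == t)             # leaving risk set
        if d > 0:
            surv *= 1 - d / n_at_risk
            out.append((t, surv))
        n_at_risk -= c
        i += c
    return out
```

With correlated pairs, the effective sample size is smaller than the number of organs, so confidence limits from the independence formula are too narrow; that motivates modeling the intra-cluster dependence explicitly.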
{"title":"Estimation and testing for clustered interval-censored bivariate survival data with application using the semi-parametric version of the Clayton-Oakes model.","authors":"Bernard Rosner, Camden Bay, Robert J Glynn, Gui-Shuang Ying, Maureen G Maguire, Mei-Ling Ting Lee","doi":"10.1007/s10985-022-09588-y","DOIUrl":"10.1007/s10985-022-09588-y","url":null,"abstract":"<p><p>The Kaplan-Meier estimator is ubiquitously used to estimate survival probabilities for time-to-event data. It is nonparametric, and thus does not require specification of a survival distribution, but it does assume that the risk set at any time t consists of independent observations. This assumption does not hold for data from paired organ systems such as occur in ophthalmology (eyes) or otolaryngology (ears), or for other types of clustered data. In this article, we estimate marginal survival probabilities in the setting of clustered data, and provide confidence limits for these estimates with intra-cluster correlation accounted for by an interval-censored version of the Clayton-Oakes model. We develop a goodness-of-fit test for general bivariate interval-censored data and apply it to the proposed interval-censored version of the Clayton-Oakes model. We also propose a likelihood ratio test for the comparison of survival distributions between two groups in the setting of clustered data under the assumption of a constant between-group hazard ratio. This methodology can be used both for balanced and unbalanced cluster sizes, and also when the cluster size is informative. We compare our test to the ordinary log rank test and the Lin-Wei (LW) test based on the marginal Cox proportional Hazards model with robust standard errors obtained from the sandwich estimator. Simulation results indicate that the ordinary log rank test over-inflates type I error, while the proposed unconditional likelihood ratio test has appropriate type I error and higher power than the LW test. 
The method is demonstrated in real examples from the Sorbinil Retinopathy Trial, and the Age-Related Macular Degeneration Study. Raw data from these two trials are provided.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"854-887"},"PeriodicalIF":1.2,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10614833/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9879574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-05-20 | DOI: 10.1007/s10985-023-09602-x
Daewoo Pak, Jing Ning, Richard J Kryscio, Yu Shen
The Nun Study is a well-known longitudinal epidemiological study of aging and dementia that recruited elderly nuns who had not yet been diagnosed with dementia (the incident cohort) as well as nuns who had dementia prior to entry (the prevalent cohort). In such a natural-history-of-disease study, multistate modeling of the combined data from both the incident and prevalent cohorts is desirable to improve the efficiency of inference. Although important, multistate modeling approaches for the combined data have rarely been used in practice, because prevalent samples do not provide the exact date of disease onset and, owing to left truncation, do not represent the target population. In this paper, we demonstrate how to adequately combine the incident and prevalent cohorts to examine risk factors for every possible transition in studying the natural history of dementia. We adapt a four-state nonhomogeneous Markov model to characterize all transitions between the clinical stages, including plausible reversible transitions. The estimation procedure using the combined data yields efficiency gains for every transition compared with using the incident cohort data alone.
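To illustrate the model structure, here is a time-homogeneous four-state Markov chain with a reversible impairment transition; the state labels and all intensity values are made up for illustration, and the paper's model additionally lets the intensities vary with time (nonhomogeneous).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative generator matrix Q (rows sum to zero):
# states 0 = intact cognition, 1 = mild impairment, 2 = dementia, 3 = death,
# with a reversible 1 -> 0 transition and death absorbing.
Q = np.array([
    [-0.30,  0.20,  0.00, 0.10],
    [ 0.05, -0.40,  0.25, 0.10],
    [ 0.00,  0.00, -0.50, 0.50],
    [ 0.00,  0.00,  0.00, 0.00],
])

def transition_probabilities(Q, t):
    """P(t) = exp(Qt): entry (i, j) is the probability of occupying state j
    at time t given state i at time 0, for a time-homogeneous chain."""
    return expm(Q * t)
```

Prevalent-cohort subjects enter in state 1 or 2 with unknown onset dates, so their likelihood contributions must condition on surviving and being in that state at entry; combining those contributions with the incident cohort is what produces the efficiency gains the abstract describes.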
{"title":"Evaluation of the natural history of disease by combining incident and prevalent cohorts: application to the Nun Study.","authors":"Daewoo Pak, Jing Ning, Richard J Kryscio, Yu Shen","doi":"10.1007/s10985-023-09602-x","DOIUrl":"10.1007/s10985-023-09602-x","url":null,"abstract":"<p><p>The Nun study is a well-known longitudinal epidemiology study of aging and dementia that recruited elderly nuns who were not yet diagnosed with dementia (i.e., incident cohort) and who had dementia prior to entry (i.e., prevalent cohort). In such a natural history of disease study, multistate modeling of the combined data from both incident and prevalent cohorts is desirable to improve the efficiency of inference. While important, the multistate modeling approaches for the combined data have been scarcely used in practice because prevalent samples do not provide the exact date of disease onset and do not represent the target population due to left-truncation. In this paper, we demonstrate how to adequately combine both incident and prevalent cohorts to examine risk factors for every possible transition in studying the natural history of dementia. We adapt a four-state nonhomogeneous Markov model to characterize all transitions between different clinical stages, including plausible reversible transitions. 
The estimating procedure using the combined data leads to efficiency gains for every transition compared to those from the incident cohort data only.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"752-768"},"PeriodicalIF":1.2,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199741/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9509992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-08-15 | DOI: 10.1007/s10985-023-09608-5
An-Min Tang, Nian-Sheng Tang, Dalei Yu
We consider a novel class of semiparametric joint models for multivariate longitudinal and survival data with dependent censoring. In these models, cumulative baseline hazard functions of unknown form are fitted by a novel class of penalized splines (P-splines) with linear constraints. The dependence between the failure time of interest and the censoring time is accommodated by a normal transformation model, in which both the nonparametric marginal survival function and the censoring function are transformed to standard normal random variables with a bivariate normal joint distribution. Based on a hybrid algorithm that embeds Metropolis-Hastings steps within a Gibbs sampler, we propose a feasible Bayesian method to simultaneously estimate the unknown parameters of interest and fit the baseline survival and censoring functions. Extensive simulation studies assess the performance of the proposed method, and its use is illustrated with the analysis of a data set from the International Breast Cancer Study Group.
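The normal transformation device couples the failure time T and censoring time C through a Gaussian copula on the survival scale. A minimal sampling sketch of that construction follows; the exponential margins and the correlation value used in the test are illustrative choices, not the paper's fitted model.

```python
import numpy as np
from scipy.stats import norm

def sample_dependent_times(inv_surv_t, inv_surv_c, rho, n, rng):
    """Normal transformation (Gaussian copula) model: draw bivariate
    normal (Z1, Z2) with correlation rho, map to correlated uniforms via
    Phi, then invert the marginal survival functions:
      T = S_T^{-1}(Phi(Z1)),  C = S_C^{-1}(Phi(Z2)),
    so each margin is exact while rho controls the T-C dependence."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = norm.cdf(z)  # correlated uniforms on (0, 1)
    return inv_surv_t(u[:, 0]), inv_surv_c(u[:, 1])
```

With `inv_surv` set to `u -> -log(u)` both margins are unit exponentials; rho = 0 recovers the usual independent-censoring assumption, and nonzero rho induces the dependent censoring the joint model accounts for.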
{"title":"Bayesian semiparametric joint model of multivariate longitudinal and survival data with dependent censoring.","authors":"An-Min Tang, Nian-Sheng Tang, Dalei Yu","doi":"10.1007/s10985-023-09608-5","DOIUrl":"10.1007/s10985-023-09608-5","url":null,"abstract":"<p><p>We consider a novel class of semiparametric joint models for multivariate longitudinal and survival data with dependent censoring. In these models, unknown-fashion cumulative baseline hazard functions are fitted by a novel class of penalized-splines (P-splines) with linear constraints. The dependence between the failure time of interest and censoring time is accommodated by a normal transformation model, where both nonparametric marginal survival function and censoring function are transformed to standard normal random variables with bivariate normal joint distribution. Based on a hybrid algorithm together with the Metropolis-Hastings algorithm within the Gibbs sampler, we propose a feasible Bayesian method to simultaneously estimate unknown parameters of interest, and to fit baseline survival and censoring functions. Intensive simulation studies are conducted to assess the performance of the proposed method. The use of the proposed method is also illustrated in the analysis of a data set from the International Breast Cancer Study Group.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"888-918"},"PeriodicalIF":1.3,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10373335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-07-12 | DOI: 10.1007/s10985-023-09604-9
Ryan Sun, Dayu Sun, Liang Zhu, Jianguo Sun
In modern biomedical datasets, it is common for recurrent outcomes data to be collected in an incomplete manner. More specifically, information on recurrent events is routinely recorded as a mixture of recurrent event data, panel count data, and panel binary data; we refer to this structure as general mixed recurrent event data. Although the aforementioned data types are individually well-studied, there does not appear to exist an established approach for regression analysis of the three component combination. Often, ad-hoc measures such as imputation or discarding of data are used to homogenize records prior to the analysis, but such measures lead to obvious concerns regarding robustness, loss of efficiency, and other issues. This work proposes a maximum likelihood regression estimation procedure for the combination of general mixed recurrent event data and establishes the asymptotic properties of the proposed estimators. In addition, we generalize the approach to allow for the existence of terminal events, a common complicating feature in recurrent event analysis. Numerical studies and application to the Childhood Cancer Survivor Study suggest that the proposed procedures work well in practical situations.
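Under a working Poisson-process model, each of the three data types contributes a tractable likelihood factor, which is what lets a single maximum likelihood procedure use all records at once: exact event times contribute the event density, panel counts contribute Poisson increments, and panel binary indicators contribute the probability of at least one event in the interval. A constant-rate sketch (the paper's semiparametric rate model, covariates, and terminal events are not handled here):

```python
import math

def log_lik(rate, exact, panel_counts, panel_binary):
    """Log-likelihood for a homogeneous Poisson process observed three ways:
      exact        : list of (event_times, follow_up) pairs,
      panel_counts : list of (interval_length, count) pairs,
      panel_binary : list of (interval_length, any_event) pairs."""
    ll = 0.0
    for times, tau in exact:  # density of the observed event times
        ll += len(times) * math.log(rate) - rate * tau
    for dt, k in panel_counts:  # Poisson count over the interval
        mu = rate * dt
        ll += k * math.log(mu) - mu - math.lgamma(k + 1)
    for dt, y in panel_binary:  # only "any event" observed
        ll += math.log(1 - math.exp(-rate * dt)) if y else -rate * dt
    return ll
```

Maximizing this combined log-likelihood in `rate` is the constant-rate analogue of the proposed estimation procedure, and it avoids the imputation or data-discarding workarounds the abstract criticizes.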
{"title":"Regression analysis of general mixed recurrent event data.","authors":"Ryan Sun, Dayu Sun, Liang Zhu, Jianguo Sun","doi":"10.1007/s10985-023-09604-9","DOIUrl":"10.1007/s10985-023-09604-9","url":null,"abstract":"<p><p>In modern biomedical datasets, it is common for recurrent outcomes data to be collected in an incomplete manner. More specifically, information on recurrent events is routinely recorded as a mixture of recurrent event data, panel count data, and panel binary data; we refer to this structure as general mixed recurrent event data. Although the aforementioned data types are individually well-studied, there does not appear to exist an established approach for regression analysis of the three component combination. Often, ad-hoc measures such as imputation or discarding of data are used to homogenize records prior to the analysis, but such measures lead to obvious concerns regarding robustness, loss of efficiency, and other issues. This work proposes a maximum likelihood regression estimation procedure for the combination of general mixed recurrent event data and establishes the asymptotic properties of the proposed estimators. In addition, we generalize the approach to allow for the existence of terminal events, a common complicating feature in recurrent event analysis. 
Numerical studies and application to the Childhood Cancer Survivor Study suggest that the proposed procedures work well in practical situations.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"807-822"},"PeriodicalIF":1.2,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11334736/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9829612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}