Design and analysis of individually randomized group-treatment trials with time to event outcomes.
Pub Date: 2025-07-01 | Epub Date: 2025-06-03 | DOI: 10.1007/s10985-025-09657-y | Lifetime Data Analysis, pp. 574-594
Sin-Ho Jung
In a typical individually randomized group-treatment (IRGT) trial, subjects are randomized between a control arm and an experimental arm. While the subjects randomized to the control arm are treated individually, those in the experimental arm are assigned to one of a number of clusters for group treatment. Because they share common frailties, the outcomes of subjects in the same cluster tend to be dependent, whereas those in the control arm are independent. In this paper, we consider IRGT trials with time-to-event outcomes. We modify the two-sample log-rank test to compare the survival data from IRGT trials and derive its sample size formula. The proposed sample size formula requires specification of the marginal survival distributions for the two arms, the bivariate survival distribution and cluster size distribution for the experimental arm, and the accrual period or accrual rate together with the additional follow-up period. In a sample size calculation, either the cluster sizes are given and the number of clusters is calculated, or the number of clusters is given at the time the study opens and the required accrual period, which determines the cluster sizes, is calculated. Simulations and a real data example show that the proposed test statistic controls the type I error rate and that the formula provides accurately powered sample sizes. Also proposed are optimal designs minimizing the total sample size or the total cost when the cost per subject differs between the two treatment arms.
{"title":"Design and analysis of individually randomized group-treatment trials with time to event outcomes.","authors":"Sin-Ho Jung","doi":"10.1007/s10985-025-09657-y","DOIUrl":"10.1007/s10985-025-09657-y","url":null,"abstract":"<p><p>In a typical individually randomized group-treatment (IRGT) trial, subjects are randomized between a control arm and an experimental arm. While the subjects randomized to the control arm are treated individually, those in the experimental arm are assigned to one of clusters for group treatment. By sharing some common frailties, the outcomes of subjects in the same groups tend to be dependent, whereas those in the control arm are independent. In this paper, we consider IRGT trials with time to event outcomes. We modify the two-sample log-rank test to compare the survival data from TRGT trials, and derive its sample size formula. The proposed sample size formula requires specification of marginal survival distributions for the two arms, bivariate survival distribution and cluster size distribution for the experimental arm, and accrual period or accrual rate together with additional follow-up period. In a sample size calculation, either the cluster sizes are given and the number of clusters is calculated or the number of clusters is given at the time of study open and the required accrual period to determine the cluster sizes is calculated. Simulations and a real data example show that the proposed test statistic controls the type I error rate and the formula provides accurately powered sample sizes. Also proposed are optimal designs minimizing the total sample size or the total cost when the cost per subject is different between two treatment arms.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"574-594"},"PeriodicalIF":1.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantile regression under dependent censoring with unknown association.
Pub Date: 2025-04-01 | Epub Date: 2025-03-16 | DOI: 10.1007/s10985-025-09647-0 | Lifetime Data Analysis, pp. 253-299
Myrthe D'Haen, Ingrid Van Keilegom, Anneleen Verhasselt
The study of survival data often requires taking proper care of the censoring mechanism that prohibits complete observation of the data. Under right censoring, only the first occurring event is observed: either the event of interest or a competing event such as withdrawal of a subject from the study. The corresponding identifiability difficulties have led many authors to impose (conditional) independence or a fully known dependence between the survival and censoring times, neither of which is always realistic. However, recent results in the survival literature have shown that parametric copula models allow identification of all model parameters, including the association parameter, under appropriately chosen marginal distributions. The present paper is the first to apply such models in a quantile regression context, hence benefiting from its well-known advantages in terms of, e.g., robustness and richer inference results. The parametric copula is supplemented with a likewise parametric, yet flexible, enriched asymmetric Laplace distribution for the survival times conditional on the covariates. Its asymmetric Laplace basis provides a close connection to quantiles, while the extension with Laguerre orthogonal polynomials ensures sufficient flexibility for increasing polynomial degrees. The distributional flavour of the quantile regression presented comes with advantages of both a theoretical and computational nature. All model parameters are proven to be identifiable, and their estimators consistent and asymptotically normal. Finally, the performance of the model and of the proposed estimation procedure is assessed through extensive simulation studies as well as an application to liver transplant data.
{"title":"Quantile regression under dependent censoring with unknown association.","authors":"Myrthe D'Haen, Ingrid Van Keilegom, Anneleen Verhasselt","doi":"10.1007/s10985-025-09647-0","DOIUrl":"10.1007/s10985-025-09647-0","url":null,"abstract":"<p><p>The study of survival data often requires taking proper care of the censoring mechanism that prohibits complete observation of the data. Under right censoring, only the first occurring event is observed: either the event of interest, or a competing event like withdrawal of a subject from the study. The corresponding identifiability difficulties led many authors to imposing (conditional) independence or a fully known dependence between survival and censoring times, both of which are not always realistic. However, recent results in survival literature showed that parametric copula models allow identification of all model parameters, including the association parameter, under appropriately chosen marginal distributions. The present paper is the first one to apply such models in a quantile regression context, hence benefiting from its well-known advantages in terms of e.g. robustness and richer inference results. The parametric copula is supplemented with a likewise parametric, yet flexible, enriched asymmetric Laplace distribution for the survival times conditional on the covariates. Its asymmetric Laplace basis provides its close connection to quantiles, while the extension with Laguerre orthogonal polynomials ensures sufficient flexibility for increasing polynomial degrees. The distributional flavour of the quantile regression presented, comes with advantages of both theoretical and computational nature. All model parameters are proven to be identifiable, consistent, and asymptotically normal. Finally, performance of the model and of the proposed estimation procedure is assessed through extensive simulation studies as well as an application on liver transplant data.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"253-299"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143639770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lifetime analysis with monotonic degradation: a boosted first hitting time model based on a homogeneous gamma process.
Pub Date: 2025-04-01 | Epub Date: 2025-04-05 | DOI: 10.1007/s10985-025-09648-z | Lifetime Data Analysis, pp. 300-339 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043765/pdf/
Clara Bertinelli Salucci, Azzeddine Bakdi, Ingrid Kristine Glad, Bo Henry Lindqvist, Erik Vanem, Riccardo De Bin
In the context of time-to-event analysis, first hitting time methods consider the event occurrence as the ending point of some evolving process. The characteristics of the process are of great relevance for the analysis, which makes this class of models interesting and particularly suitable for applications where something about the degradation path is known. In cases where the degradation can only worsen, a monotonic process is the most suitable choice. This paper proposes a boosting algorithm for first hitting time models based on an underlying homogeneous gamma process to account for the monotonicity of the degradation trend. The predictive power and versatility of the algorithm are demonstrated with real data examples from both engineering and biomedical applications, as well as with simulated examples.
{"title":"Lifetime analysis with monotonic degradation: a boosted first hitting time model based on a homogeneous gamma process.","authors":"Clara Bertinelli Salucci, Azzeddine Bakdi, Ingrid Kristine Glad, Bo Henry Lindqvist, Erik Vanem, Riccardo De Bin","doi":"10.1007/s10985-025-09648-z","DOIUrl":"10.1007/s10985-025-09648-z","url":null,"abstract":"<p><p>In the context of time-to-event analysis, First hitting time methods consider the event occurrence as the ending point of some evolving process. The characteristics of the process are of great relevance for the analysis, which makes this class of models interesting and particularly suitable for applications where something about the degradation path is known. In cases where the degradation can only worsen, a monotonic process is the most suitable choice. This paper proposes a boosting algorithm for first hitting time models based on an underlying homogeneous gamma process to account for the monotonicity of the degradation trend. The predictive power and versatility of the algorithm are shown with real data examples from both engineering and biomedical applications, as well as with simulated examples.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"300-339"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043765/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143789376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Total time on test-based goodness-of-fit statistics for the reciprocal property in fatigue-life models.
Pub Date: 2025-04-01 | Epub Date: 2025-04-26 | DOI: 10.1007/s10985-025-09653-2 | Lifetime Data Analysis 31(2), pp. 422-441
Cecilia Castro, Marta Azevedo, Víctor Leiva, Luís Meira-Machado
We propose a new goodness-of-fit procedure designed to verify the reciprocal property that characterizes the fatigue-life or Birnbaum-Saunders (BS) distribution. Under this property, scaling a random variable that takes positive values by its median results in the same distribution as its reciprocal, a feature frequently encountered in reliability and survival studies. Our procedure employs total time on test (TTT) curves to compare the behavior of the observed data and its reciprocal counterpart, capturing both local and global discrepancies through supremum- and area-based statistics. We establish the theoretical validity of these statistics under mild assumptions, showing that they deliver accurate inference for moderate to large samples. Simulation evidence indicates that our TTT-based procedures are sensitive to subtle departures from log-symmetry, particularly when the distribution underlying the data has heavier or lighter tails than the assumed one. Illustrative real data examples further reveal how overlooking deviations from the reciprocal property can distort reliability estimates and predictions of failure times, showing the practical importance of the new goodness-of-fit procedure. Overall, our findings strengthen the BS framework and provide robust tools for model validation and selection when log-symmetric modeling assumptions are in place.
{"title":"Total time on test-based goodness-of-fit statistics for the reciprocal property in fatigue-life models.","authors":"Cecilia Castro, Marta Azevedo, Víctor Leiva, Luís Meira-Machado","doi":"10.1007/s10985-025-09653-2","DOIUrl":"https://doi.org/10.1007/s10985-025-09653-2","url":null,"abstract":"<p><p>We propose a new goodness-of-fit procedure designed to verify the reciprocal property that characterizes the fatigue-life or Birnbaum-Saunders (BS) distribution. Under this property, scaling a random variable that takes positive values by its median results in the same distribution as its reciprocal, a feature frequently encountered in reliability and survival studies. Our procedure employs total time on test (TTT) curves to compare the behavior of the observed data and its reciprocal counterpart, capturing both local and global discrepancies through supremum- and area-based statistics. We establish the theoretical validity of these statistics under mild assumptions, showing that they deliver accurate inference for moderate to large samples. Simulation evidence indicates that our TTT-based procedures are sensitive to subtle departures from log-symmetry, particularly when the distribution underlying the data has heavier or lighter tails than the assumed one. Illustrative real data examples further reveal how overlooking deviations from the reciprocal property can distort reliability estimates and predictions of failure times, showing the practical importance of the new goodness-of-fit procedure. Overall, our findings strengthen the BS framework and provide robust tools for model validation and selection when log-symmetric modeling assumptions are in place.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"31 2","pages":"422-441"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A pairwise pseudo-likelihood approach for regression analysis of doubly truncated data.
Pub Date: 2025-04-01 | Epub Date: 2025-03-31 | DOI: 10.1007/s10985-025-09649-y | Lifetime Data Analysis, pp. 340-363
Cunjin Zhao, Peijie Wang, Jianguo Sun
Double truncation commonly occurs in astronomy, epidemiology and economics. Compared to one-sided truncation, double truncation, which combines left and right truncation, is more challenging to handle, and the methods for analyzing doubly truncated data are limited. In this situation, a common approach is to perform an analysis conditional on the truncation times, which is simple but may not be efficient. To address this, we propose a pairwise pseudo-likelihood approach that aims to recover some of the information missed by the conditional methods and can yield more efficient estimation. The resulting estimator is shown to be consistent and asymptotically normal. An extensive simulation study indicates that the proposed procedure works well in practice and is indeed more efficient than the conditional approach. The proposed methodology is applied to an AIDS study.
{"title":"A pairwise pseudo-likelihood approach for regression analysis of doubly truncated data.","authors":"Cunjin Zhao, Peijie Wang, Jianguo Sun","doi":"10.1007/s10985-025-09649-y","DOIUrl":"10.1007/s10985-025-09649-y","url":null,"abstract":"<p><p>Double truncation commonly occurs in astronomy, epidemiology and economics. Compared to one-sided truncation, double truncation, which combines both left and right truncation, is more challenging to handle and the methods for analyzing doubly truncated data are limited. For the situation, a common approach is to perform conditional analysis conditional on truncation times, which is simple but may not be efficient. Corresponding to this, we propose a pairwise pseudo-likelihood approach that aims to recover some information missed in the conditional methods and can yield more efficient estimation. The resulting estimator is shown to be consistent and asymptotically normal. An extensive simulation study indicates that the proposed procedure works well in practice and is indeed more efficient than the conditional approach. The proposed methodology applied to an AIDS study.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"340-363"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143755562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Goodness-of-fit testing in the presence of cured data: IPCW approach.
Pub Date: 2025-04-01 | Epub Date: 2025-03-04 | DOI: 10.1007/s10985-025-09646-1 | Lifetime Data Analysis, pp. 233-252
Marija Cuparić, Bojana Milošević
Here we revisit a goodness-of-fit testing problem for randomly right-censored data in the presence of cured subjects, i.e., the population consists of two parts: the cured or non-susceptible group, who will never experience the event of interest, and those who will undergo the event of interest when followed up sufficiently long. We consider modifications of previously proposed characterization-based goodness-of-fit tests for the exponential distribution, constructed via the inverse probability of censoring weighted (IPCW) U- or V-statistic approach. We present their asymptotic properties and extend our discussion to encompass suitable generalizations applicable to a variety of tests formulated using the same methodology. A comparative power study of the proposed tests against a recent CvM-based competitor, as well as against modifications of the most prominent competitors identified in prior studies that did not consider the presence of cured subjects, demonstrates good finite-sample performance. The novel tests are illustrated on a real dataset related to leukemia relapse.
{"title":"Goodness-of-fit testing in the presence of cured data: IPCW approach.","authors":"Marija Cuparić, Bojana Milošević","doi":"10.1007/s10985-025-09646-1","DOIUrl":"10.1007/s10985-025-09646-1","url":null,"abstract":"<p><p>Here we revisit a goodness-of-fit testing problem for randomly right-censored data in the presence of cured subjects, i.e. the population consists of two parts: the cured or non-susceptible group, who will never experience the event of interest versus those who will undergo the event of interest when followed up sufficiently long. We consider the modifications of proposed characterization-based goodness-of-fit tests for the exponential distribution constructed via the inverse probability of censoring weighted U- or V-approach. We present their asymptotic properties and extend our discussion to encompass suitable generalizations applicable to a variety of tests formulated using the same methodology. A comparative power study of these proposed tests against a recent CvM-based competitor and the modifications of the most prominent competitors identified in prior studies that did not consider the presence of cured subjects, demonstrates good finite sample performance. Novel tests are illustrated on a real dataset related to leukemia relapse.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":" ","pages":"233-252"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A flexible Bayesian g-formula for causal survival analyses with time-dependent confounding.
Pub Date: 2025-04-01 | Epub Date: 2025-04-14 | DOI: 10.1007/s10985-025-09652-3 | Lifetime Data Analysis 31(2), pp. 394-421
Xinyuan Chen, Liangyuan Hu, Fan Li
In longitudinal observational studies with time-to-event outcomes, a common objective in causal analysis is to estimate the causal survival curve under hypothetical intervention scenarios. The g-formula is a useful tool for this analysis. To enhance the traditional parametric g-formula, we developed an alternative g-formula estimator that incorporates Bayesian Additive Regression Trees (BART) into the modeling of the time-evolving generative components, aiming to mitigate the bias due to model misspecification. We focus on binary time-varying treatments and introduce a general class of g-formulas for discrete survival data that can incorporate longitudinal balancing scores. The minimum sufficient formulation of these longitudinal balancing scores is linked to the nature of the treatment strategies, i.e., static or dynamic. For each type of treatment strategy, we provide posterior sampling algorithms. We conducted simulations to illustrate the empirical performance of the proposed method and demonstrate its practical utility using data from the Yale New Haven Health System's electronic health records.
{"title":"A flexible Bayesian g-formula for causal survival analyses with time-dependent confounding.","authors":"Xinyuan Chen, Liangyuan Hu, Fan Li","doi":"10.1007/s10985-025-09652-3","DOIUrl":"https://doi.org/10.1007/s10985-025-09652-3","url":null,"abstract":"<p><p>In longitudinal observational studies with time-to-event outcomes, a common objective in causal analysis is to estimate the causal survival curve under hypothetical intervention scenarios. The g-formula is a useful tool for this analysis. To enhance the traditional parametric g-formula, we developed an alternative g-formula estimator, which incorporates the Bayesian Additive Regression Trees into the modeling of the time-evolving generative components, aiming to mitigate the bias due to model misspecification. We focus on binary time-varying treatments and introduce a general class of g-formulas for discrete survival data that can incorporate longitudinal balancing scores. The minimum sufficient formulation of these longitudinal balancing scores is linked to the nature of treatment strategies, i.e., static or dynamic. For each type of treatment strategy, we provide posterior sampling algorithms. We conducted simulations to illustrate the empirical performance of the proposed method and demonstrate its practical utility using data from the Yale New Haven Health System's electronic health records.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"31 2","pages":"394-421"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144056950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust inverse probability weighted estimators for doubly truncated Cox regression with closed-form standard errors.
Pub Date: 2025-04-01 | Epub Date: 2025-04-15 | DOI: 10.1007/s10985-025-09650-5 | Lifetime Data Analysis 31(2), pp. 364-393 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043752/pdf/
Omar Vazquez, Sharon X Xie
Survival data is doubly truncated when only participants who experience an event during a random interval are included in the sample. Existing methods typically correct for double truncation bias in Cox regression through inverse probability weighting via the nonparametric maximum likelihood estimate (NPMLE) of the selection probabilities. This approach relies on two key assumptions, quasi-independent truncation and positivity of the sampling probabilities, yet there are no methods available to thoroughly assess these assumptions in the regression context. Furthermore, these estimators can be particularly sensitive to extreme event times. Finally, current double truncation methods rely on bootstrapping for variance estimation. Aside from the unnecessary computational burden, there are often identifiability issues with the NPMLE during bootstrap resampling. To address these limitations of current methods, we propose a class of robust Cox regression coefficient estimators with time-varying inverse probability weights and extend these estimators to conduct sensitivity analysis regarding possible non-positivity of the sampling probabilities. Also, we develop a nonparametric test and graphical diagnostic for verifying the quasi-independent truncation assumption. Finally, we provide closed-form standard errors for the NPMLE as well as for the proposed estimators. The proposed estimators are evaluated through extensive simulations and illustrated using an AIDS study.
{"title":"Robust inverse probability weighted estimators for doubly truncated Cox regression with closed-form standard errors.","authors":"Omar Vazquez, Sharon X Xie","doi":"10.1007/s10985-025-09650-5","DOIUrl":"10.1007/s10985-025-09650-5","url":null,"abstract":"<p><p>Survival data is doubly truncated when only participants who experience an event during a random interval are included in the sample. Existing methods typically correct for double truncation bias in Cox regression through inverse probability weighting via the nonparametric maximum likelihood estimate (NPMLE) of the selection probabilities. This approach relies on two key assumptions, quasi-independent truncation and positivity of the sampling probabilities, yet there are no methods available to thoroughly assess these assumptions in the regression context. Furthermore, these estimators can be particularly sensitive to extreme event times. Finally, current double truncation methods rely on bootstrapping for variance estimation. Aside from the unnecessary computational burden, there are often identifiability issues with the NPMLE during bootstrap resampling. To address these limitations of current methods, we propose a class of robust Cox regression coefficient estimators with time-varying inverse probability weights and extend these estimators to conduct sensitivity analysis regarding possible non-positivity of the sampling probabilities. Also, we develop a nonparametric test and graphical diagnostic for verifying the quasi-independent truncation assumption. Finally, we provide closed-form standard errors for the NPMLE as well as for the proposed estimators. The proposed estimators are evaluated through extensive simulations and illustrated using an AIDS study.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"31 2","pages":"364-393"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043752/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144049810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Author correction to: "Causal survival analysis under competing risks using longitudinal modified treatment policies".
Pub Date: 2025-04-01 | Epub Date: 2025-04-14 | DOI: 10.1007/s10985-025-09651-4 | Lifetime Data Analysis 31(2), pp. 442-471
Iván Díaz, Nicholas Williams, Katherine L Hoffman, Nima S Hejazi
The published version of the manuscript (Díaz, Hoffman, Hejazi, Lifetime Data Anal 30, 213-236, 2024) contained an error (we thank Kara Rudolph for pointing out an issue that led to uncovering the error) in the definition of the outcome, which had cascading effects and created errors in the definitions of multiple objects in the paper. We correct those errors here. For completeness, we reproduce the entire manuscript, underlining places where we made a correction. Longitudinal modified treatment policies (LMTP) have recently been developed as a novel method to define and estimate causal parameters that depend on the natural value of treatment. LMTPs represent an important advancement in causal inference for longitudinal studies, as they allow the non-parametric definition and estimation of the joint effect of multiple categorical, ordinal, or continuous treatments measured at several time points. We extend the LMTP methodology to problems in which the outcome is a time-to-event variable subject to a competing event that precludes observation of the event of interest. We present identification results and non-parametric locally efficient estimators that use flexible data-adaptive regression techniques to alleviate model misspecification bias, while retaining important asymptotic properties such as √n-consistency. We present an application to the estimation of the effect of time-to-intubation on acute kidney injury among COVID-19 hospitalized patients, where death by other causes is taken to be the competing event.
{"title":"Author correction to: \"causal survival analysis under competing risks using longitudinal modified treatment policies\".","authors":"Iván Díaz, Nicholas Williams, Katherine L Hoffman, Nima S Hejazi","doi":"10.1007/s10985-025-09651-4","DOIUrl":"https://doi.org/10.1007/s10985-025-09651-4","url":null,"abstract":"<p><p>The published version of the manuscript (D´iaz, Hoffman, Hejazi Lifetime Data Anal 30, 213-236, 2024) contained an error (We would like to thank Kara Rudolph for pointing out an issue that led to uncovering the error)) in the definition of the outcome that had cascading effects and created errors in the definition of multiple objects in the paper. We correct those errors here. For completeness, we reproduce the entire manuscript, underlining places where we made a correction.Longitudinal modified treatment policies (LMTP) have been recently developed as a novel method to define and estimate causal parameters that depend on the natural value of treatment. LMTPs represent an important advancement in causal inference for longitudinal studies as they allow the non-parametric definition and estimation of the joint effect of multiple categorical, ordinal, or continuous treatments measured at several time points. We extend the LMTP methodology to problems in which the outcome is a time-to-event variable subject to a competing event that precludes observation of the event of interest. We present identification results and non-parametric locally efficient estimators that use flexible data-adaptive regression techniques to alleviate model misspecification bias, while retaining important asymptotic properties such as <math><msqrt><mi>n</mi></msqrt> </math> -consistency. We present an application to the estimation of the effect of the time-to-intubation on acute kidney injury amongst COVID- 19 hospitalized patients, where death by other causes is taken to be the competing event.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"31 2","pages":"442-471"},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144025067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two-stage pseudo maximum likelihood estimation of semiparametric copula-based regression models for semi-competing risks data.
Pub Date: 2025-01-01 | Epub Date: 2024-10-23 | DOI: 10.1007/s10985-024-09640-z | Lifetime Data Analysis, pp. 52-75
Sakie J Arachchige, Xinyuan Chen, Qian M Zhou
We propose a two-stage estimation procedure for a copula-based model with semi-competing risks data, where the non-terminal event is subject to dependent censoring by the terminal event, and both events are subject to independent censoring. With a copula-based model, the marginal survival functions of the individual event times are specified by semiparametric transformation models, and the dependence between the bivariate event times is specified by a parametric copula function. In the first stage of the estimation procedure, the parameters associated with the margin of the terminal event are estimated using only the corresponding observed outcomes; in the second stage, the marginal parameters for the non-terminal event time and the copula parameter are estimated together by maximizing a pseudo-likelihood function based on the joint distribution of the bivariate event times. We derive the asymptotic properties of the proposed estimator and provide an analytic variance estimator for inference. Through simulation studies, we show that our approach leads to consistent estimates with less computational cost and more robustness than the one-stage procedure developed in Chen YH (Lifetime Data Anal 18:36-57, 2012), where all parameters are estimated simultaneously. In addition, our approach demonstrates more desirable finite-sample performance than another existing two-stage estimation method proposed in Zhu H et al. (Commun Stat Theory Methods 51(22):7830-7845, 2021). An R package, PMLE4SCR, is developed to implement the proposed method.