Pub Date: 2025-10-01 | Epub Date: 2025-09-22 | DOI: 10.1007/s10985-025-09668-9
Ariane Cwiling, Vittorio Perduca, Olivier Bouaziz
In the context of right-censored data, we study the problem of predicting the restricted time to event based on a set of covariates. Under a quadratic loss, this problem is equivalent to estimating the conditional restricted mean survival time (RMST). To this end, we propose a flexible and easy-to-use ensemble algorithm that combines pseudo-observations with the super learner. The classical theoretical results for the super learner are extended to right-censored data using a new definition of pseudo-observations, the so-called split pseudo-observations. Simulation studies indicate that the split pseudo-observations and the standard pseudo-observations behave similarly even for small sample sizes. The method is applied to maintenance and colon cancer datasets, demonstrating its practical value compared with other prediction methods. We complement the predictions obtained from our method with the RMST-adapted risk measure, prediction intervals, and variable importance measures developed in our previous work.
Title: "Pseudo-observations and super learner for the estimation of the restricted mean survival time." Lifetime Data Analysis, pp. 713-746.
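The jackknife pseudo-observations that the method builds on can be sketched with a plain Kaplan-Meier RMST estimator; the paper's split pseudo-observations refine this construction and are not reproduced here. A minimal numpy sketch with illustrative function names:

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time: exact area under the
    Kaplan-Meier step function on [0, tau]."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    jumps = np.unique(t[d == 1])
    jumps = jumps[jumps < tau]
    area, surv, prev = 0.0, 1.0, 0.0
    for u in jumps:
        area += surv * (u - prev)
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (d == 1))
        surv *= 1.0 - deaths / at_risk
        prev = u
    return area + surv * (tau - prev)

def rmst_pseudo_obs(time, event, tau):
    """Leave-one-out jackknife pseudo-observations for the RMST:
    P_i = n * theta_hat - (n - 1) * theta_hat_(-i).
    The pseudo-observations can then be fed to any regression
    method or to a super learner as an ordinary outcome."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    n = len(time)
    theta = rmst(time, event, tau)
    idx = np.arange(n)
    return np.array([n * theta - (n - 1) * rmst(time[idx != i], event[idx != i], tau)
                     for i in range(n)])
```

A useful sanity check: with no censoring, the pseudo-observation for subject i is exactly min(T_i, tau), since the jackknife of a sample mean returns the observations themselves.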
Pub Date: 2025-10-01 | Epub Date: 2025-10-28 | DOI: 10.1007/s10985-025-09669-8
Morten Overgaard
Weighting by the inverse probability of censoring is a way to handle censoring in regression analyses where the outcome may be missing due to right-censoring. In this paper, three approaches based on this idea are compared in a setting where the Kaplan-Meier estimator is used to estimate the censoring probability: weighted regression, regression with a weighted outcome, and regression of a jackknife pseudo-observation based on a weighted estimator. Expressions for the asymptotic variances are given in each case and compared with each other and with the uncensored case. In terms of low asymptotic variance, no clear winner emerges: which approach attains the lowest asymptotic variance depends on the censoring distribution. Expressions for the limit of the standard sandwich variance estimator in the three cases are also provided, revealing an overestimation under the implied assumptions.
Title: "A comparison of Kaplan-Meier-based inverse probability of censoring weighted regression methods." Lifetime Data Analysis, pp. 747-783. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586238/pdf/
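The common ingredient of the three approaches is an IPCW weight built from a Kaplan-Meier estimate of the censoring distribution. A minimal sketch of the "weighted outcome" idea for a restricted mean, with illustrative function names (the paper's regression setting is more general):

```python
import numpy as np

def censoring_km(x, delta):
    """Kaplan-Meier estimate of the censoring survivor G(t) = P(C > t),
    obtained by treating censorings (delta == 0) as the events."""
    order = np.argsort(x)
    xs, ds = np.asarray(x)[order], np.asarray(delta)[order]
    times, surv, g = [], [], 1.0
    for u in np.unique(xs[ds == 0]):
        g *= 1.0 - np.sum((xs == u) & (ds == 0)) / np.sum(xs >= u)
        times.append(u)
        surv.append(g)
    times = np.array(times)
    ext = np.concatenate(([1.0], np.array(surv)))

    def G_minus(t):
        # G evaluated just before t: the left limit used in IPCW weights
        return ext[np.searchsorted(times, t, side="left")]
    return G_minus

def ipcw_restricted_mean(x, delta, tau):
    """IPCW estimate of E[min(T, tau)], i.e. an intercept-only weighted
    regression: min(T_i, tau) is fully observed when delta_i == 1 or
    x_i >= tau, and each observed value is reweighted by 1 / G(y_i-)."""
    x, delta = np.asarray(x, float), np.asarray(delta, float)
    G = censoring_km(x, delta)
    y = np.minimum(x, tau)
    observed = np.maximum(delta, (x >= tau).astype(float))
    w = observed / np.maximum(G(y), 1e-12)
    return np.sum(w * y) / len(x)
```

Restricting the outcome at tau keeps the weights bounded by 1/G(tau-), which sidesteps the instability of unrestricted IPCW means in the censored tail.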
Pub Date: 2025-10-01 | Epub Date: 2025-10-21 | DOI: 10.1007/s10985-025-09674-x
Annika Strömer, Nadja Klein, Ingrid Van Keilegom, Andreas Mayr
Title: "Modelling dependent censoring in time-to-event data using boosting copula regression." Lifetime Data Analysis, pp. 994-1016. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586418/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-10-15 | DOI: 10.1007/s10985-025-09672-z
Yiyuan Huang, Douglas Schaubel, Min Zhang
In many clinical trials, one is interested in evaluating the treatment effect based on different types of outcomes, including recurrent and terminal events. The most popular approach is the time-to-first-event analysis (TTFE), based on the composite outcome of the time to the first event among all events of interest. The motivation for the composite outcome approach is to increase the number of events and potentially increase power. Other composite outcome or composite analysis methods have also been studied in the literature, but are less widely adopted in practice. In this article, we first review the mainstream composite analysis methods and classify them into three categories: (A) Composite-outcome Methods, which combine multiple events into a composite outcome before analysis, e.g., combining events into a time-to-event outcome in TTFE and into a single recurrent event process in the combined-recurrent-event analysis (CRE); (B) Joint-analysis Methods, which test for the recurrent event process and the terminal event jointly, e.g., Joint Frailty Model (JFM), Ghosh-Lin Method (GL), and Nelson-Aalen Method (NA); (C) Win-ratio type Methods that account for the ordering of two types of events, e.g., Win-fraction Regression (WR). We conduct comprehensive simulation studies to evaluate the performance of the various methods in terms of type I error control and power under a wide range of scenarios. We find that the non-parametric joint testing approach (GL/NA) and CRE have the best overall performance, whereas TTFE and WR exhibit relatively low power. Also, adding events that have no or weak association with treatment usually decreases power.
Title: "Statistical methods for composite analysis of recurrent and terminal events in clinical trials." Lifetime Data Analysis, pp. 810-829.
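The TTFE composite outcome described above is mechanically simple; a sketch of the construction for one subject:

```python
def ttfe(event_times, censor_time):
    """Time-to-first-event composite outcome: follow-up ends at the
    earliest event of any type (recurrent or terminal), or at censoring
    if no event occurs first.
    Returns (time, status) with status 1 = composite event, 0 = censored."""
    observed = [t for t in event_times if t <= censor_time]
    if observed:
        return min(observed), 1
    return censor_time, 0
```

For example, `ttfe([5.0, 2.0, 8.0], 10.0)` gives `(2.0, 1)`, while `ttfe([12.0], 10.0)` gives `(10.0, 0)`: events after all events of interest are pooled, only the first one before censoring contributes.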
Pub Date: 2025-10-01 | Epub Date: 2025-10-14 | DOI: 10.1007/s10985-025-09671-0
Miki Horiguchi, Lu Tian, Kenneth L Kehl, Hajime Uno
Delayed treatment effects on time-to-event outcomes are commonly observed in randomized controlled trials of cancer immunotherapies. When the treatment effect has a delayed onset, the conventional test/estimation approach, using the log-rank test for the between-group comparison and Cox's hazard ratio to quantify the treatment effect, can be suboptimal: the log-rank test may lack power in such scenarios, and the interpretation of the hazard ratio is often ambiguous. Recently, alternative test/estimation approaches have been proposed to address these limitations. One such approach is based on long-term restricted mean survival time (LT-RMST); another is based on average hazard with survival weight (AH-SW). This paper integrates these two concepts and introduces a novel long-term average hazard (LT-AH) approach with survival weight for both hypothesis testing and estimation. Numerical studies highlight specific scenarios where the proposed LT-AH method achieves higher power than the existing alternatives. The LT-AH for each group can be estimated nonparametrically, and the proposed between-group comparison maintains test/estimation coherency. Because the difference and ratio of LT-AH do not rely on model assumptions about the relationship between the two groups, the LT-AH approach provides a robust framework for estimating the magnitude of between-group differences. Furthermore, LT-AH allows for treatment effect quantification in both absolute (difference in LT-AH) and relative (ratio of LT-AH) terms, aligning with guideline recommendations and addressing practical needs.
Title: "Assessing delayed treatment benefits of immunotherapy using long-term average hazard: a novel test/estimation approach." Lifetime Data Analysis, pp. 784-809. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586407/pdf/
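Reading the average hazard with survival weight as the event probability in a window divided by the person-time spent in it (the paper's exact definition may differ), a long-term version restricted to [tau1, tau2] can be sketched as:

```python
import numpy as np

def long_term_average_hazard(S, tau1, tau2, n_grid=200000):
    """Sketch of an average hazard with survival weight restricted to the
    window [tau1, tau2]: the probability mass falling in the window,
    S(tau1) - S(tau2), divided by the person-time spent in it, the
    integral of S over the window. S is any survival function (in
    practice a Kaplan-Meier estimate; here a plain callable)."""
    grid = np.linspace(tau1, tau2, n_grid)
    person_time = np.mean(S(grid)) * (tau2 - tau1)  # simple quadrature
    return (S(tau1) - S(tau2)) / person_time
```

One sanity check: for a constant hazard, this quantity recovers the hazard rate itself on any window, which is what an "average hazard" should do.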
Pub Date: 2025-10-01 | Epub Date: 2025-07-11 | DOI: 10.1007/s10985-025-09665-y
Zhiguo Li
Data analysis methods are well developed for making inferences about adaptive treatment strategies in sequential multiple assignment randomized trials (SMARTs) when outcomes are continuous or right-censored. However, in some clinical studies, time-to-event outcomes are interval censored: the time of interest is only known to lie between two random visit times to the clinic, which is common in areas such as psychology studies. Appropriate analysis methods for SMART studies in this case have not been considered in the literature, and this article fills that gap. Based on a proportional hazards model, we propose a weighted spline-based sieve maximum likelihood method and make inference about group differences using a Wald test. Asymptotic properties of the estimator of the hazard ratio are derived, and variance estimation is considered. We conduct a simulation to assess its finite sample performance, and then analyze data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial.
Title: "Analysis of interval censored survival data in sequential multiple assignment randomized trials." Lifetime Data Analysis, pp. 852-868.
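Given a point estimate and standard error of the log hazard ratio from the sieve fit, the Wald test for the group comparison reduces to a standard normal comparison; a generic sketch (not the paper's sieve machinery):

```python
import math

def wald_test_log_hr(log_hr_hat, se_log_hr):
    """Two-sided Wald test of H0: log hazard ratio = 0.
    Returns the z statistic and p-value, using the standard normal CDF
    expressed through the error function."""
    z = log_hr_hat / se_log_hr
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # Phi(|z|)
    return z, 2.0 * (1.0 - phi)
```

For instance, a log hazard ratio of 0.5 with standard error 0.25 gives z = 2 and a p-value of about 0.046.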
Pub Date: 2025-10-01 | Epub Date: 2025-10-09 | DOI: 10.1007/s10985-025-09673-y
Emily M Damone, Matthew A Psioda, Joseph G Ibrahim
Many methods exist to jointly model either recurrent events and a related terminal survival event, or longitudinal outcome measures and a related terminal survival event. However, few methods account for the dependency among all three outcomes of interest, and none allow all three to be modeled without strong correlation assumptions. We propose a joint model that uses subject-specific random effects to connect the survival model (terminal and recurrent events) with a longitudinal outcome model. In the proposed method, proportional hazards models with shared frailties are used to model dependence between the recurrent and terminal events, while a separate (but correlated) set of random effects is used in a generalized linear mixed model to model dependence with the longitudinal outcome measures. All random effects are related through an assumed multivariate normal distribution. The proposed joint modeling approach allows for flexible models, particularly for unique longitudinal trajectories, that can be used in a wide range of health applications. We evaluate the model through simulation studies as well as through an application to data from the Atherosclerosis Risk in Communities (ARIC) study.
Title: "Bayesian joint models for longitudinal, recurrent, and terminal event data." Lifetime Data Analysis, pp. 932-949.
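The shared-random-effects structure can be illustrated by simulating from a toy version of such a joint model; all rates, covariances, and visit schedules below are illustrative choices, not the paper's specification:

```python
import numpy as np

def simulate_joint_subject(rng, followup=5.0):
    """Simulate one subject from a toy joint model: correlated normal random
    effects (b_surv, b_long) link a shared-frailty recurrent/terminal event
    model with a linear mixed model for the longitudinal outcome."""
    cov = np.array([[0.5, 0.3],
                    [0.3, 0.5]])                 # illustrative covariance
    b = rng.multivariate_normal([0.0, 0.0], cov)
    frailty = np.exp(b[0])                       # shared frailty for both event types
    terminal = rng.exponential(1.0 / (0.2 * frailty))  # terminal hazard 0.2 * frailty
    end = min(terminal, followup)
    # recurrent events: Poisson process with rate 1.0 * frailty until end
    recurrent, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / frailty)
        if t >= end:
            break
        recurrent.append(t)
    # longitudinal measurements at yearly visits before follow-up ends,
    # shifted by the correlated random effect b[1]
    visits = np.arange(0.0, end, 1.0)
    y = 2.0 + 0.5 * visits + b[1] + rng.normal(0.0, 0.3, size=len(visits))
    return {"terminal": end, "died": terminal <= followup,
            "recurrent": recurrent, "visits": visits, "y": y}
```

Because the frailty multiplies both hazards, subjects with many recurrent events also tend to die earlier, and the correlation in the random-effects covariance ties the longitudinal level to that same risk.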
Pub Date: 2025-10-01 | Epub Date: 2025-08-18 | DOI: 10.1007/s10985-025-09667-w
Wei-En Lu, Ai Ni
In large observational studies with survival outcome and low event rates, the case-cohort design is commonly used to reduce the cost associated with covariate measurement. The restricted mean survival time (RMST) difference has been increasingly used as an alternative to hazard ratio when estimating the causal effect on survival outcomes. We investigate the estimation of marginal causal effect on RMST under the stratified case-cohort design while adjusting for measured confounders through propensity score stratification. The asymptotic normality of the estimator is established, and its variance formula is derived. Simulation studies are performed to evaluate the finite sample performance of the proposed method compared to several alternative methods. Finally, we apply the proposed method to the Atherosclerosis Risk in Communities study to estimate the marginal causal effect of high-sensitivity C-reactive protein level on coronary heart disease-free survival.
Title: "Causal effect estimation on restricted mean survival time under case-cohort design via propensity score stratification." Lifetime Data Analysis, pp. 898-931. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586416/pdf/
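The stratification step can be sketched for the uncensored case, where the per-stratum RMST reduces to a mean of truncated outcomes (the paper handles censoring via survival estimation and the case-cohort weighting, both omitted here):

```python
import numpy as np

def stratified_rmst_difference(ps, treated, time, tau, n_strata=5):
    """Propensity-score-stratified RMST difference, sketched for
    uncensored data: within each propensity-score quintile the RMST per
    arm is simply the mean of min(T, tau) (under censoring a Kaplan-Meier
    RMST would replace it); stratum-specific differences are averaged
    with stratum-size weights."""
    ps, time = np.asarray(ps, float), np.asarray(time, float)
    treated = np.asarray(treated)
    edges = np.quantile(ps, np.linspace(0.0, 1.0, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1,
                     0, n_strata - 1)
    y = np.minimum(time, tau)
    effect = 0.0
    for s in range(n_strata):
        m = strata == s
        diff = y[m & (treated == 1)].mean() - y[m & (treated == 0)].mean()
        effect += diff * m.sum() / len(ps)
    return effect
```

In a simulated confounded setting with a null treatment effect, the stratified estimate is close to zero while the naive arm difference is clearly biased, which is the point of the adjustment.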
Pub Date: 2025-10-01 | Epub Date: 2025-08-27 | DOI: 10.1007/s10985-025-09666-x
Yuchen Mao, Lianming Wang, Xuemei Sui
Joint modeling of longitudinal responses and survival time has attracted considerable attention in the statistics literature over the last few decades. Most existing works focus on the joint analysis of longitudinal data and right-censored data. In this article, we propose a new frailty model for the joint analysis of a longitudinal response and an interval-censored survival time. Such data commonly arise in real-life studies where participants are examined at periodic or irregular follow-up times. The proposed joint model contains a nonlinear mixed effects submodel for the longitudinal response and a semiparametric probit submodel for the survival time, given a shared normal frailty. The proposed joint model allows the regression coefficients to be interpreted as the marginal effects, up to a multiplicative constant, on both the longitudinal and survival responses. Adopting splines allows us to approximate the unknown baseline functions in both submodels with only a finite number of unknown coefficients while providing great modeling flexibility. An efficient Gibbs sampler is developed for posterior computation, in which all parameters and latent variables can be sampled easily from their full conditional distributions. The proposed method shows good estimation performance in simulation studies and is further illustrated by a real-life application to patient data from the Aerobics Center Longitudinal Study. The R code for the proposed methodology is made available for public use.
Title: "Bayesian joint analysis of longitudinal data and interval-censored failure time data." Lifetime Data Analysis, pp. 950-969.
Pub Date: 2025-07-01 | Epub Date: 2025-06-14 | DOI: 10.1007/s10985-025-09658-x
Chi Wing Chu, Hok Kan Ling
We study shape-constrained nonparametric estimation of the underlying survival function in a cross-sectional study without follow-up. Assuming the rate of the initiation event is stationary over time, the observed current duration is a length-biased and multiplicatively censored version of the underlying failure time of interest. We focus on two shape constraints for the underlying survival function, namely log-concavity and convexity. The log-concavity constraint is versatile, as it allows for log-concave densities, bi-log-concave distributions, increasing densities, and multi-modal densities. We establish the consistency and pointwise asymptotic distribution of the shape-constrained estimators. In particular, the proposed estimator under log-concavity is consistent and tuning-parameter-free, thus circumventing the well-known inconsistency of the Grenander estimator at 0, where correction methods typically involve tuning parameters.
Title: "Shape-constrained estimation for current duration data in cross-sectional studies." Lifetime Data Analysis, pp. 595-630.
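The length-biased, multiplicatively censored structure of current duration data can be checked by simulation: under a stationary initiation rate, the current duration has density S(x)/mu, which for exponential durations is again exponential (memorylessness). A sketch under those assumptions:

```python
import numpy as np

def current_duration_sample(rng, n, lam=1.0):
    """Simulate current durations under a stationary initiation rate:
    draw the underlying duration length-biased (density proportional to
    t * f(t)), then observe a uniform fraction of it (multiplicative
    censoring). For Exp(lam) durations, the length-biased draw is
    Gamma(shape=2, scale=1/lam)."""
    t_lb = rng.gamma(shape=2.0, scale=1.0 / lam, size=n)  # length-biased duration
    u = rng.random(n)                                     # uniform position in the spell
    return u * t_lb                                       # observed current duration
```

For Exp(lam) durations, S(x)/mu = lam * exp(-lam * x), so the simulated current durations should match an Exp(lam) sample in both mean and tail probabilities.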