Title: Bayesian semiparametric partially linear cure models with partly interval-censored data
Authors: Yuyang Guo, Chunjie Wang, Xiaoyu Liu
Pub Date: 2025-12-18 | DOI: 10.1007/s10985-025-09682-x
Lifetime Data Analysis 32(1): 4

Partly interval-censored data with a cure fraction are commonly encountered in epidemiological and biomedical studies, where exact failure times are observed for some subjects while others fall within certain intervals. For cure survival data, two-component mixture cure models, which directly model the probability of being uncured and the conditional survival function of susceptible subjects, have attracted considerable attention. However, conventional cure models typically assume linear covariate effects in both components, which may limit their flexibility and applicability when covariate effects are nonlinear. In this paper, we propose a flexible semiparametric mixture cure model that incorporates parametric and nonparametric covariate structures for both the cure probability and the event-time distribution of susceptible subjects. We utilize spline-based techniques to approximate unspecified functions and implement a four-stage data augmentation approach to address the complexities inherent in the model and data structure. A computationally convenient Bayesian approach is developed to obtain posterior estimates of the model parameters. The finite-sample performance of the proposed method is evaluated through simulation studies. The practical utility of the approach is demonstrated by an analysis of child mortality data.
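The two-component mixture structure described in this abstract can be sketched numerically. The sketch below uses a logistic uncure probability and an exponential susceptible survival purely for illustration; the paper's spline-based nonparametric components and interval-censored likelihood are not reproduced, and all parameter values are hypothetical.

```python
import math

def uncure_prob(x, beta0=-0.5, beta1=1.0):
    """Logistic model for the probability of being susceptible (uncured)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def susceptible_survival(t, x, rate=0.3, gamma=0.5):
    """Illustrative exponential survival for susceptible subjects."""
    return math.exp(-rate * math.exp(gamma * x) * t)

def population_survival(t, x):
    """Mixture cure survival: cured subjects never fail, so S(t|x) plateaus."""
    pi = uncure_prob(x)
    return (1.0 - pi) + pi * susceptible_survival(t, x)

# The population survival curve levels off at the cure fraction 1 - pi(x).
for t in (0.0, 5.0, 50.0):
    print(round(population_survival(t, x=0.0), 4))
```

The plateau at `1 - uncure_prob(x)` is exactly what distinguishes cure survival data from ordinary time-to-event data.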
Title: A flexible copula model for bivariate survival data with dependent censoring
Authors: Reuben Adatorwovor, Yinghao Pan
Pub Date: 2025-12-09 | DOI: 10.1007/s10985-025-09678-7
Lifetime Data Analysis 32(1): 2

Independent censoring is a key assumption usually made when analyzing time-to-event data. However, this assumption is difficult to assess and can be problematic, particularly in studies with disproportionate loss to follow-up due to adverse events. This paper addresses the challenges associated with dependent censoring by introducing a likelihood-based approach for analyzing bivariate survival data under dependent censoring. A flexible Joe-Hu copula is used to formulate the interdependence among the quadruple of times (two event times and two censoring times). The marginal distribution of each event/censoring time is modeled via the Cox proportional hazards model. Our estimator possesses consistency and desirable asymptotic properties under regularity conditions. We present results from extensive simulation studies and further illustrate our approach using prostate cancer data.
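To illustrate how a copula ties event and censoring margins together, the sketch below uses the simpler Clayton copula with exponential margins; the paper itself uses a Joe-Hu copula with Cox proportional hazards margins, which is not reproduced here, and the rates and dependence parameter are hypothetical.

```python
import math

def clayton_copula(u, v, theta=2.0):
    """Clayton copula C(u, v); theta > 0 induces positive dependence."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def joint_survival(t, c, event_rate=0.5, cens_rate=0.3, theta=2.0):
    """P(T > t, C > c) from exponential margins linked by the copula."""
    s_event = math.exp(-event_rate * t)  # marginal survival of the event time
    s_cens = math.exp(-cens_rate * c)    # marginal survival of the censoring time
    return clayton_copula(s_event, s_cens, theta)

# Positive dependence pushes the joint survival above the independence product.
print(joint_survival(1.0, 1.0) >= math.exp(-0.5) * math.exp(-0.3))  # True
```

Replacing the product of margins with a copula of the margins is the core device that lets the likelihood accommodate dependent censoring.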
Title: Integrating high-dimensional censored data under privacy constraints via localized computations
Authors: Bingyao Huang, Yanyan Liu, Xin Ye
Pub Date: 2025-12-09 | DOI: 10.1007/s10985-025-09677-8
Lifetime Data Analysis 32(1): 3

Limited sample size and censoring inherently limit the statistical efficiency of high-dimensional data analysis. While integrating data from multiple sources can enhance estimation efficiency, concerns remain regarding data privacy breaches and between-site heterogeneity. In this paper, we propose a privacy-preserving approach to integrating high-dimensional right-censored data with source-level heterogeneity. The proposed method is based on a local computation strategy: each site obtains an integrative estimate from its own full dataset and summary statistics from the other sites. For each party, this strategy not only meets data privacy constraints but also makes full use of its local data. Moreover, we introduce a refined procedure for practical use that avoids shrinking local covariate effects unique to individual sites. Theoretical results for the proposed estimators, including consistency, asymptotic normality, and efficiency gains, are established. Simulation experiments demonstrate its superiority over integrative methods relying solely on summary statistics and over purely local estimation. The application to multi-source clinical data on ovarian cancer further verifies its practical effectiveness.
Title: Reliability and estimation of the zero-inflated transmuted geometric distribution with applications and actuarial insights
Authors: Kalpasree Sharma, Partha Jyoti Hazarika, Mohamed S Eliwa, Mahmoud El-Morshedy
Pub Date: 2025-12-05 | DOI: 10.1007/s10985-025-09683-w
Lifetime Data Analysis 32(1): 1

Overdispersion is common in many real-life count data sets, and this variability often results from an excessive number of zeros. To address this issue, zero-inflated distributions provide a flexible modeling approach capable of capturing high levels of dispersion. In this paper, we introduce a new count distribution, the zero-inflated transmuted geometric distribution. We explore its key statistical properties, reliability aspects, and actuarial traits. Additionally, we employ different estimation strategies and conduct a simulation study to assess the performance of the estimators. We demonstrate the practical utility of the proposed model through the analysis of three empirical data sets. Lastly, we carry out a likelihood ratio test to justify the use of the proposed zero-inflated distribution.
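A minimal sketch of the distribution's building blocks, assuming the standard quadratic rank transmutation map G(k) = (1 + λ)F(k) − λF(k)² applied to a geometric CDF, plus an extra point mass at zero. The parameterization and all values below are illustrative, not taken from the paper.

```python
def geom_cdf(k, theta):
    """CDF of the geometric distribution on {0, 1, 2, ...}."""
    return 1.0 - (1.0 - theta) ** (k + 1)

def transmuted_cdf(k, theta, lam):
    """Quadratic rank transmutation: G(k) = (1 + lam) F(k) - lam F(k)^2."""
    f = geom_cdf(k, theta)
    return (1.0 + lam) * f - lam * f * f

def zi_transmuted_geom_pmf(k, p, theta, lam):
    """Zero-inflated pmf: point mass p at zero, weight (1 - p) on the base model."""
    g_prev = transmuted_cdf(k - 1, theta, lam) if k > 0 else 0.0
    base = transmuted_cdf(k, theta, lam) - g_prev
    return p * (1.0 if k == 0 else 0.0) + (1.0 - p) * base

# The pmf sums to one; zero-inflation lifts P(X = 0) above the base model.
total = sum(zi_transmuted_geom_pmf(k, p=0.2, theta=0.4, lam=0.3) for k in range(200))
print(round(total, 6))  # → 1.0
```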
Title: Bayesian generalized method of moments applied to pseudo-observations in survival analysis
Authors: Léa Orsini, Caroline Brard, Emmanuel Lesaffre, Guosheng Yin, David Dejardin, Gwénaël Le Teuff
Pub Date: 2025-10-01 (Epub 2025-09-22) | DOI: 10.1007/s10985-025-09670-1
Lifetime Data Analysis, pp. 970-993 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586244/pdf/

Bayesian inference for survival regression modeling offers numerous advantages, especially for decision-making and external data borrowing, but demands the specification of the baseline hazard function, which may be a challenging task. We propose an alternative approach that does not need the specification of this function. Our approach combines pseudo-observations, to convert censored data into longitudinal data, with the generalized method of moments (GMM) to estimate the parameters of interest from the survival function directly. GMM may be viewed as an extension of the generalized estimating equations (GEE) currently used for frequentist pseudo-observations analysis and can be extended to the Bayesian framework using a pseudo-likelihood function. We assessed the behavior of the frequentist and Bayesian GMM in the new context of analyzing pseudo-observations. We compared their performances to the Cox, GEE, and Bayesian piecewise exponential models through a simulation study of two-arm randomized clinical trials. Frequentist and Bayesian GMMs gave valid inferences with similar performances compared to the three benchmark methods, except for small sample sizes and high censoring rates. For illustration, three post-hoc efficacy analyses were performed on randomized clinical trials involving patients with Ewing Sarcoma, producing results similar to those of the benchmark methods. Through a simple application of estimating hazard ratios, these findings confirm the effectiveness of this new Bayesian approach based on pseudo-observations and the generalized method of moments. This offers new insights into using pseudo-observations for Bayesian survival analysis.
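The jackknife pseudo-observations that underlie this approach can be computed in a few lines: for each subject, n·Ŝ(t) minus (n−1) times the leave-one-out Kaplan-Meier estimate. This is a generic sketch of the classical construction on a toy dataset, not the paper's GMM or Bayesian machinery.

```python
def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t) from right-censored data."""
    s = 1.0
    for u in sorted(set(tt for tt, e in zip(times, events) if e and tt <= t)):
        at_risk = sum(1 for tt in times if tt >= u)
        d = sum(1 for tt, e in zip(times, events) if e and tt == u)
        s *= 1.0 - d / at_risk
    return s

def pseudo_observations(times, events, t):
    """Jackknife pseudo-observations: n*S_hat(t) - (n-1)*S_hat_(-i)(t)."""
    n = len(times)
    full = km_survival(times, events, t)
    return [
        n * full - (n - 1) * km_survival(times[:i] + times[i + 1:],
                                         events[:i] + events[i + 1:], t)
        for i in range(n)
    ]

times = [2.0, 3.0, 4.0, 5.0, 7.0, 9.0]
events = [1, 0, 1, 1, 0, 1]   # 1 = event observed, 0 = censored
po = pseudo_observations(times, events, t=4.5)
print(round(sum(po) / len(po), 4))  # → 0.625, matching S_hat(4.5)
```

Once computed, the pseudo-observations can be treated as (approximately) uncensored responses in a regression or moment-based estimating procedure.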
Title: Simultaneous clustering and joint modeling of multivariate binary longitudinal and time-to-event data
Authors: Srijan Chattopadhyay, Sevantee Basu, Swapnaneel Bhattacharyya, Manash Pratim Gogoi, Kiranmoy Das
Pub Date: 2025-10-01 | DOI: 10.1007/s10985-025-09664-z
Lifetime Data Analysis, pp. 830-851

Joint modeling of longitudinal outcomes and time-to-event data has been extensively used in medical studies because it can simultaneously model the longitudinal trajectories and assess their effects on the event time. However, in many applications we come across heterogeneous populations, and the subjects therefore need to be clustered for powerful statistical inference. We consider multivariate binary longitudinal outcomes, for which we use Bayesian data augmentation to obtain the corresponding latent continuous outcomes. These latent outcomes are clustered using Bayesian consensus clustering, and then we perform a cluster-specific joint analysis. Longitudinal outcomes are modeled by generalized linear mixed models, and we use the proportional hazards model for modeling time-to-event data. Our work is motivated by a clinical trial conducted by Tata Translational Cancer Research Center, Kolkata, where 184 cancer patients were treated for the first two years and then followed for the next three years. Three biomarkers (lymphocyte count, neutrophil count and platelet count), categorized as normal/abnormal, were measured during the treatment, and the relapse time (if any) was recorded for each patient. Our analysis finds three latent clusters for which the effects of the covariates and the median non-relapse probabilities substantially differ. Through a simulation study we illustrate the effectiveness of the proposed simultaneous clustering and joint modeling.
Title: Multi-source analyses of average treatment effects with failure time outcomes
Authors: Lan Wen, Jon A Steingrimsson, Sarah E Robertson, Issa J Dahabreh
Pub Date: 2025-10-01 (Epub 2025-07-04) | DOI: 10.1007/s10985-025-09663-0
Lifetime Data Analysis, pp. 869-897

Analyses of multi-source data, such as data from multi-center randomized trials, individual participant data meta-analyses, or pooled analyses of observational studies, combine information to estimate an overall average treatment effect. However, if average treatment effects vary across data sources, commonly used approaches for multi-source analyses may not have a clear causal interpretation with respect to a target population of interest. In this paper, we provide identification and estimation of average treatment effects in a target population underlying one of the data sources, in a point treatment setting for failure time outcomes potentially subject to right-censoring. We do not assume the absence of effect heterogeneity, and hence our results are valid, under certain assumptions, when average treatment effects vary across data sources. We derive the efficient influence functions for source-specific average treatment effects using multi-source data under two different sets of assumptions, and propose a novel doubly robust estimator for our estimand. We evaluate the finite-sample performance of our estimator in simulation studies, and apply our methods to data from the HALT-C multi-center trials.
Title: Pseudo-observations and super learner for the estimation of the restricted mean survival time
Authors: Ariane Cwiling, Vittorio Perduca, Olivier Bouaziz
Pub Date: 2025-10-01 (Epub 2025-09-22) | DOI: 10.1007/s10985-025-09668-9
Lifetime Data Analysis, pp. 713-746

In the context of right-censored data, we study the problem of predicting the restricted time to event based on a set of covariates. Under a quadratic loss, this problem is equivalent to estimating the conditional restricted mean survival time (RMST). To that aim, we propose a flexible and easy-to-use ensemble algorithm that combines pseudo-observations and the super learner. The classical theoretical results of the super learner are extended to right-censored data, using a new definition of pseudo-observations, the so-called split pseudo-observations. Simulation studies indicate that the split pseudo-observations and the standard pseudo-observations behave similarly even for small sample sizes. The method is applied to maintenance and colon cancer datasets, demonstrating its practical interest compared to other prediction methods. We complement the predictions obtained from our method with our RMST-adapted risk measure, prediction intervals, and variable importance measures developed in a previous work.
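Once a survival curve has been estimated, the restricted mean survival time is just the area under the step curve up to the horizon tau. A minimal sketch of that integration step on a hand-made step curve; the super learner and pseudo-observation machinery of the paper are not reproduced.

```python
def rmst(times, survs, tau):
    """Restricted mean survival time: area under a step survival curve on [0, tau].
    `times` are sorted jump points; `survs[i]` is S(t) on [times[i], next jump)."""
    area = 0.0
    prev_t, prev_s = 0.0, 1.0  # the curve starts at S(0) = 1
    for t, s in zip(times, survs):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)  # last piece, truncated at tau
    return area

# Step curve: S = 1 on [0,1), 0.8 on [1,3), 0.5 on [3,6), 0.2 afterwards.
print(round(rmst([1.0, 3.0, 6.0], [0.8, 0.5, 0.2], tau=5.0), 6))  # → 3.6
```

The value 3.6 is 1·1 + 0.8·2 + 0.5·2: the expected event time when follow-up is capped at tau = 5.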
Title: A comparison of Kaplan-Meier-based inverse probability of censoring weighted regression methods
Authors: Morten Overgaard
Pub Date: 2025-10-01 (Epub 2025-10-28) | DOI: 10.1007/s10985-025-09669-8
Lifetime Data Analysis, pp. 747-783 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586238/pdf/

Weighting with the inverse probability of censoring is an approach to dealing with censoring in regression analyses where the outcome may be missing due to right-censoring. This paper compares three separate approaches involving this idea in a setting where the Kaplan-Meier estimator is used to estimate the censoring probability. In more detail, the three approaches involve weighted regression, regression with a weighted outcome, and regression of a jack-knife pseudo-observation based on a weighted estimator. Expressions for the asymptotic variances are given in each case, and the expressions are compared to each other and to the uncensored case. In terms of low asymptotic variance, a clear winner cannot be found: which approach has the lowest asymptotic variance depends on the censoring distribution. Expressions for the limit of the standard sandwich variance estimator in the three cases are also provided, revealing an overestimation under the implied assumptions.
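The basic weighting scheme being compared can be sketched directly: each observed event gets weight 1/Ĝ(Tᵢ⁻), where Ĝ is the Kaplan-Meier estimate of the censoring survival, and censored observations get weight zero. A toy illustration of the weight construction only; the paper's three regression variants and variance expressions are not reproduced.

```python
def censoring_survival_left(times, cens, t):
    """Kaplan-Meier estimate of the censoring survival G(t-) (left limit at t)."""
    s = 1.0
    for u in sorted(set(tt for tt, c in zip(times, cens) if c and tt < t)):
        at_risk = sum(1 for tt in times if tt >= u)
        d = sum(1 for tt, c in zip(times, cens) if c and tt == u)
        s *= 1.0 - d / at_risk
    return s

def ipcw_weights(times, deltas):
    """IPCW weights: delta_i / G_hat(t_i-), zero for censored observations."""
    cens = [1 - d for d in deltas]  # censoring indicator is the complement
    return [
        d / censoring_survival_left(times, cens, t) if d else 0.0
        for t, d in zip(times, deltas)
    ]

times = [1.0, 2.0, 3.0, 4.0, 5.0]
deltas = [1, 0, 1, 1, 1]          # 1 = event observed, 0 = censored
w = ipcw_weights(times, deltas)
print([round(x, 3) for x in w])   # → [1.0, 0.0, 1.333, 1.333, 1.333]
```

Events occurring after the censoring time at 2.0 are up-weighted by 1/0.75, compensating for the subjects that censoring removed from observation.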
Title: Modelling dependent censoring in time-to-event data using boosting copula regression
Authors: Annika Strömer, Nadja Klein, Ingrid Van Keilegom, Andreas Mayr
Pub Date: 2025-10-01 (Epub 2025-10-21) | DOI: 10.1007/s10985-025-09674-x
Lifetime Data Analysis, pp. 994-1016 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12586418/pdf/