Pub Date: 2026-02-09, DOI: 10.1007/s10985-026-09693-2
Bang Wang, Zi Wang, Yu Cheng
Time-to-first-event analysis is often used in studies involving multiple event times, where all components are treated equally regardless of their clinical importance. Alternative summaries such as the Win Ratio, Net Benefit, and Win Odds (WO) have drawn attention lately because they can handle outcomes of different types and allow a hierarchical ordering of the component outcomes. In this paper, we focus on the WO and propose proportional WO regression models to evaluate the treatment effect on multiple outcomes while controlling for other risk factors. The models are as easily interpretable as a standard logistic regression model. However, the proposed WO regression is more flexible: multiple outcomes of different types can be modeled together, and the estimating equation is constructed from all possible, potentially dependent, pairings of a treated individual with a control one under the functional response modeling framework. In addition, informative ties are carefully distinguished from comparisons that are inconclusive due to censoring, and the latter are handled via inverse probability of censoring weighting. We establish the asymptotic properties of the estimated regression coefficients using U-statistic theory and demonstrate the finite-sample performance through numerical studies.
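As a toy illustration of the win-odds summary the regression models build on (not the authors' estimating-equation or IPCW machinery), each subject can be represented as a tuple of hierarchically ordered outcomes and every treated-control pair compared down the hierarchy. The tuple encoding, the "larger is better" convention, and the equal 0.5 weighting of ties are our assumptions here; censoring-induced inconclusive pairs are simply absent from this sketch.

```python
from itertools import product

def win_odds(treated, control):
    """Crude win-odds sketch.  Each subject is a tuple of hierarchically
    ordered outcomes (most important first); larger values are better.
    A pair is decided by the first component on which it differs; pairs
    tied on every component count as ties with weight 1/2 on each side."""
    wins = losses = ties = 0
    for t, c in product(treated, control):      # all treated-control pairings
        for t_k, c_k in zip(t, c):              # walk down the hierarchy
            if t_k > c_k:
                wins += 1
                break
            if t_k < c_k:
                losses += 1
                break
        else:                                   # tied on every component
            ties += 1
    return (wins + 0.5 * ties) / (losses + 0.5 * ties)
```

With `treated = [(2, 5), (1, 3)]` and `control = [(1, 4), (1, 3)]` this gives two wins, one loss, and one tie, so a win odds of 5/3.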
"Generalized win-odds regression models for composite endpoints." Lifetime Data Analysis 32(1): 13.
Pub Date: 2026-02-09, DOI: 10.1007/s10985-026-09689-y
Qiyue Huang, Anyin Feng, Qiang Wu, Xingwei Tong
This study develops estimation methods for a deep partially linear Cox proportional hazards model with a change point under current status data, aiming to accommodate complex change-point effects. Prior work has largely relied on linear models, which may inadequately capture relationships among multivariate covariates and thus hinder accurate change-point detection. To address this, we use a deep neural network to model covariate effects within the Cox framework and propose a maximum likelihood estimation procedure for the model. We establish asymptotic properties of the resulting estimators, including consistency, asymptotic independence, and semiparametric efficiency. Simulation studies indicate that the proposed inference procedure performs well in finite samples. An analysis of a breast cancer dataset is provided to illustrate the methodology.
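The change-point structure inside a Cox partial likelihood can be sketched without the deep-network component the paper actually uses: below, the covariate effect on x shifts by gamma once a second covariate z crosses the threshold tau. The linear predictor form, the variable names, and the use of a plain sum over risk sets (Breslow-style handling of ties) are our illustrative assumptions, not the authors' estimator.

```python
import math

def neg_log_partial_lik(beta, gamma, tau, data):
    """Toy change-point Cox negative log partial likelihood.

    data = [(time, event, x, z), ...] with event = 1 for an observed
    failure.  The effect of x is beta before the change point tau in z,
    and beta + gamma after it; the paper replaces this linear part with
    a deep neural network."""
    def lp(x, z):
        return beta * x + (gamma * x if z > tau else 0.0)

    nll = 0.0
    for t_i, d_i, x_i, z_i in data:
        if not d_i:                      # censored: contributes only to risk sets
            continue
        risk = sum(math.exp(lp(x, z)) for t, _, x, z in data if t >= t_i)
        nll -= lp(x_i, z_i) - math.log(risk)
    return nll
```

At `beta = gamma = 0` every subject has unit risk score, so the value reduces to the sum of log risk-set sizes at the event times, a quick sanity check when wiring up an optimizer.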
"Deep learning for the change-point Cox model with current status data." Lifetime Data Analysis 32(1): 14.
Pub Date: 2026-01-31, DOI: 10.1007/s10985-026-09691-4
Wen Su, Changyu Liu, Guosheng Yin, Jian Huang
Current status data are commonly encountered in modern medicine, econometrics, and social science. Their unique characteristics pose significant challenges to analysis, and existing methods can perform poorly when the underlying model is misspecified. To address these difficulties, we propose a model-free two-stage generative approach for estimating the conditional cumulative distribution function given predictors. We first learn a conditional generator nonparametrically for the joint conditional distribution of observation times and event status, and then construct nonparametric maximum likelihood estimators of the conditional distribution functions based on samples from the conditional generator. We then study the convergence properties of the proposed estimator and establish its consistency. Simulation studies under various settings show the superior performance of the deep conditional generative approach over classical modeling approaches, and an application to Parvovirus B19 seroprevalence data yields reasonable predictions.
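The second stage rests on the classical NPMLE for current status data, which is an isotonic regression of the event indicators on the observation times, computable by the pool adjacent violators algorithm. The sketch below shows only that classical building block (not the conditional generator); the data layout and function name are ours.

```python
def npmle_current_status(obs):
    """Classical NPMLE of F(t) from current status data via pool
    adjacent violators: isotonic regression of the event indicators
    delta on the observation times c.

    obs = [(c, delta), ...] with delta = 1 if the event occurred by c.
    Returns [(c, F_hat(c)), ...] sorted by c."""
    obs = sorted(obs)
    blocks = []                          # each block: [sum of deltas, count]
    for _, d in obs:
        blocks.append([float(d), 1])
        # merge while a block's mean exceeds its successor's (monotonicity)
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fhat = []
    for s, n in blocks:
        fhat.extend([s / n] * n)         # block mean repeated over its span
    return [(c, f) for (c, _), f in zip(obs, fhat)]
```

For indicators 0, 1, 0, 1 at times 1–4, the violating middle pair is pooled, giving the nondecreasing fit 0, 0.5, 0.5, 1.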
"Wasserstein GAN-based estimation for conditional distribution function with current status data." Lifetime Data Analysis 32(1): 12. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858558/pdf/
Pub Date: 2026-01-31, DOI: 10.1007/s10985-026-09688-z
Na Bo, Ying Ding
Estimating heterogeneous treatment effects (HTE) for survival outcomes has gained increasing attention in precision medicine, as it captures variations in treatment efficacy among patients or subgroups. However, most existing methods conduct post-hoc subgroup identifications rather than simultaneously estimating HTE and identifying causal subgroups. In this paper, we propose an interpretable HTE estimation framework that integrates meta-learners with tree-based methods to estimate the conditional average treatment effect (CATE) for survival outcomes and identify predictive subgroups simultaneously. We evaluated the performance of our method through extensive simulation studies. We also demonstrated its application in a large randomized controlled trial (RCT) for age-related macular degeneration (AMD), a progressive polygenic eye disease, to estimate the HTE of an antioxidant and mineral supplement on time-to-AMD progression and to identify genetically defined subgroups with enhanced treatment effects. Our method offers a direct interpretation of the estimated HTE and provides evidence to support precision healthcare.
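The meta-learner skeleton underlying such frameworks can be sketched with a T-learner: fit one outcome model per treatment arm and take the difference of predictions as the CATE. Everything below is a bare-bones illustration under our own assumptions; the paper pairs meta-learners with survival-aware tree-based models, whereas here the plug-in regressor is a trivial group-mean predictor and the outcome is an uncensored scalar.

```python
from collections import defaultdict

def t_learner_cate(data, fit):
    """Bare-bones T-learner: fit an outcome model per arm, then
    CATE(x) = mu1(x) - mu0(x).

    data = [(x, y, a), ...] with arm a in {0, 1}; `fit(xs, ys)` is any
    regressor factory returning a predict(x) callable."""
    arm0 = [(x, y) for x, y, a in data if a == 0]
    arm1 = [(x, y) for x, y, a in data if a == 1]
    mu0 = fit(*zip(*arm0))
    mu1 = fit(*zip(*arm1))
    return lambda x: mu1(x) - mu0(x)

def mean_by_group(xs, ys):
    """Toy regressor: predict the mean outcome within each x-group."""
    acc = defaultdict(list)
    for x, y in zip(xs, ys):
        acc[x].append(y)
    means = {x: sum(v) / len(v) for x, v in acc.items()}
    return lambda x: means[x]
```

Swapping `mean_by_group` for a survival tree or forest (and the outcome for a restricted mean survival time) moves this toward the setting of the paper; the skeleton stays the same.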
"Estimation of the interpretable heterogeneous treatment effect with causal subgroup discovery in survival outcomes." Lifetime Data Analysis 32(1): 11.
Pub Date: 2026-01-23, DOI: 10.1007/s10985-026-09687-0
David Oakes
In this anniversary issue I briefly review some work on the notion of collapsibility and indicate some lingering questions.
"On Multiple Time Scales and Collapsibility." Lifetime Data Analysis 32(1): 9.
Pub Date: 2026-01-23, DOI: 10.1007/s10985-026-09690-5
Tong Wu, Jiawen Hu, Zhi-Sheng Ye, Nan Chen
High-dimensional data with left-censored responses are increasingly common in modern applications, yet existing methods for analyzing them are limited. Classical Tobit models fail to handle nonlinear relationships or perform high-dimensional variable selection, whereas deep learning approaches often prioritize prediction performance but lack selection and interpretation capabilities. To address this gap, we propose an integrated deep learning framework, the Deep Tobit model, which employs the negative Tobit log-likelihood as its loss function to properly account for data censoring. A two-stage feature selection algorithm is further developed, with theoretical guarantees on convergence rate and selection consistency. Extensive simulation studies and real-data applications on left-censored aero-engine casing vibration data and HIV viral load data demonstrate that the proposed framework outperforms several state-of-the-art baselines in both variable selection and prediction accuracy.
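The loss at the heart of the framework, the negative Tobit log-likelihood, is easy to write down for left-censoring at a known threshold: observed responses contribute the normal log-density, censored ones the log-probability mass below the threshold. The sketch below uses a scalar mean for illustration; the paper plugs a network's output in for `mu`, and the scalar setup and argument names are our assumptions.

```python
import math

def tobit_nll(mu, sigma, ys, c):
    """Negative Tobit log-likelihood for responses left-censored at c.

    Observed y (> c) contribute the N(mu, sigma^2) log-density; censored
    y (<= c) contribute log P(Y <= c) = log Phi((c - mu) / sigma)."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    nll = 0.0
    for y in ys:
        if y > c:                        # fully observed response
            nll -= math.log(phi((y - mu) / sigma) / sigma)
        else:                            # left-censored: only know y <= c
            nll -= math.log(Phi((c - mu) / sigma))
    return nll
```

A censored point at the threshold with `mu = c` contributes exactly `-log(1/2)`, which makes the censored branch easy to unit-test before training anything on top of it.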
"Deep tobit model: an integrated framework for high-dimensional censored regression with variable selection." Lifetime Data Analysis 32(1): 10.
Pub Date: 2026-01-14, DOI: 10.1007/s10985-025-09676-9
Ina Dormuth, Carolin Herrmann, Frank Konietschke, Markus Pauly, Matthias Wirth, Marc Ditzhaus
When comparing multiple groups in clinical trials, we are interested not only in whether any groups differ but in where the differences lie. Such research questions lead to testing multiple individual hypotheses. To control the familywise error rate (FWER), we must either apply corrections or use tests that control the FWER by design. For time-to-event data, a Bonferroni-corrected log-rank test is commonly used. This approach has two significant drawbacks: (i) it loses power when the proportional hazards assumption is violated, and (ii) the correction generally lowers power, especially when the test statistics are not independent. We propose two new tests based on combined weighted log-rank tests: a simple multiple contrast test of weighted log-rank tests, and a new multiple contrast test that extends the so-called CASANOVA test, which was introduced for factorial designs. Our test shows promise of being more powerful under crossing hazards and eliminates the need for additional p-value correction. We assess the performance of our tests through extensive Monte Carlo simulation studies covering both proportional and non-proportional hazard scenarios. Finally, we apply the new and reference methods to a real-world data example. The new approaches control the FWER and show reasonable power in all scenarios.
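The weighted log-rank statistics that these contrast tests combine reduce, for two groups, to a weighted observed-minus-expected sum over event times and its hypergeometric variance. The sketch below is a textbook two-group version under our own conventions (no ties across weights, plain risk-set counting), not the authors' multivariate contrast construction; `weight(t) = 1` recovers the ordinary log-rank numerator.

```python
def weighted_logrank(sample, weight):
    """Two-group weighted log-rank sketch.

    sample = [(time, event, group), ...] with group in {0, 1};
    weight(t) is a weight function, e.g. lambda t: 1.0 for the plain
    log-rank.  Returns (U, V): the weighted observed-minus-expected
    sum for group 1 and its estimated variance."""
    U = V = 0.0
    for t in sorted({s for s, d, _ in sample if d}):     # event times
        at_risk = [g for s, d, g in sample if s >= t]
        n, n1 = len(at_risk), sum(at_risk)
        d_all = sum(1 for s, d, g in sample if d and s == t)
        d1 = sum(1 for s, d, g in sample if d and s == t and g == 1)
        w = weight(t)
        U += w * (d1 - d_all * n1 / n)                   # observed - expected
        if n > 1:                                        # hypergeometric variance
            V += w * w * d_all * (n1 / n) * (1 - n1 / n) * (n - d_all) / (n - 1)
    return U, V
```

`U / sqrt(V)` is then compared to a normal reference; the paper's contribution is combining several such statistics (different weights, different group contrasts) while controlling the FWER without a Bonferroni adjustment.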
"Beyond Bonferroni: new multiple contrast tests for time-to-event data under non-proportional hazards." Lifetime Data Analysis 32(1): 8. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12804333/pdf/
Pub Date: 2026-01-05, DOI: 10.1007/s10985-025-09685-8
Qin Yu, Xin Zhou, Jia Zhou, Zemin Zheng
In high-dimensional survival analysis, sparse learning is critically important, as evidenced by applications in molecular biology, economics, and climate science. Despite rapid advances in sparse modeling of survival data, valid statistical inference under measurement error remains largely unexplored. In this article, we introduce a new method, the double debiased Lasso (DDL), for constructing confidence intervals in high-dimensional errors-in-variables accelerated failure time (AFT) models. It not only corrects the bias of an initial weighted least squares Lasso estimate by inverting the Karush-Kuhn-Tucker (KKT) conditions, but also alleviates the impact of measurement errors in both the initial estimator and the inverse covariance matrix by using the nearest positive semi-definite projection technique. Furthermore, we establish comprehensive theoretical properties, including the asymptotic normality of the proposed DDL estimator and estimation consistency for the initial estimator. The effectiveness of our method is demonstrated through numerical studies and real-data analysis.
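One concrete ingredient named above, the nearest positive semi-definite projection, has a standard form for symmetric matrices: clip negative eigenvalues at zero, which yields the Frobenius-norm-nearest PSD matrix. The sketch below shows only that repair step under our assumptions; how the corrected matrix enters the DDL debiasing is specific to the paper.

```python
import numpy as np

def nearest_psd(a):
    """Frobenius-nearest PSD projection of a (near-)symmetric matrix:
    symmetrize, eigendecompose, clip negative eigenvalues at zero.
    Useful when a measurement-error-corrected covariance estimate is
    indefinite and must be repaired before downstream use."""
    sym = (a + a.T) / 2.0                       # enforce exact symmetry
    vals, vecs = np.linalg.eigh(sym)            # real eigendecomposition
    return vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T
```

For instance, `diag(1, -2)` projects to `diag(1, 0)`: the valid direction is untouched and the offending eigenvalue is floored at zero.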
"Confidence intervals for high-dimensional accelerated failure time models under measurement errors." Lifetime Data Analysis 32(1): 7.
Pub Date: 2025-12-29, DOI: 10.1007/s10985-025-09680-z
Thomas Harder Scheike
We consider semiparametric random-effects models for recurrent events in the presence of a terminal event. The recurrent events follow either a proportional marginal rate model (Cox in J Roy Stat Soc Ser B 34:406-424, 1972) or a proportional marginal mean model (Ghosh and Lin in Stat Sin 34: 663-688, 2002), while the marginal rate of the terminal event is given by a proportional model. The dependence between the recurrent events and the terminal event is described by two variants of random effects models that allow the processes to share the random effect, either fully or partly. The models are formulated as two-stage models, where the marginals are fitted in an initial stage and the random effects parameters are estimated subsequently. The estimation does not require the choice of any tuning parameters, in contrast to procedures based on numerical integration, and the numerical procedure works well. Standard errors are computed by bootstrapping. The methods are applied to the Taichung Peritoneal Dialysis Study (Chen et al. in Biom J 57(2):215-233, 2015), which considered recurrent inflammations in dialysis patients.
"Two-stage recurrent events random effects models." Lifetime Data Analysis 32(1): 6.
Pub Date: 2025-12-24, DOI: 10.1007/s10985-025-09686-7
Daphné Aurouet, Valentin Patilea
Motivated by the need to analyze continuously updated data sets in time-to-event modeling, we propose a practically feasible nonparametric approach to estimating the conditional hazard function given a set of continuous and discrete predictors. The method is based on a representation of the conditional hazard as a ratio between a joint density and a conditional expectation determined by the distribution of the observed variables. We show that such ratio representations are available for uni- and bivariate event times, in the presence of common types of random censoring and truncation, with possibly cured individuals, and for competing risks. This opens the door to nonparametric approaches in many time-to-event predictive models. To estimate the joint densities and conditional expectations, we propose recursive kernel smoothing, which is well suited to online estimation. We derive asymptotic results for these estimators and show that they achieve optimal convergence rates. Simulation experiments show the good finite-sample performance of our recursive estimator under right censoring.
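The recursive update pattern behind online kernel smoothing can be sketched for a density on a fixed evaluation grid: each new observation updates the running estimate as f_n = f_{n-1} + (K_h(x - X_n) - f_{n-1}) / n, so past data never need revisiting. This is our simplified sketch with a fixed bandwidth, whereas recursive estimators of the Wolverton-Wagner type typically let the bandwidth shrink with n; the paper tracks a joint density and a conditional expectation with updates of this kind and takes their ratio.

```python
import math

class RecursiveKDE:
    """Online kernel density estimate on a fixed grid.

    Each update folds one new observation into the running estimate
    without storing past data: f_n = f_{n-1} + (K_h(.) - f_{n-1}) / n,
    i.e. a running average of scaled Gaussian kernel evaluations."""

    def __init__(self, h, grid):
        self.h = h                        # bandwidth (fixed, for simplicity)
        self.grid = grid                  # evaluation points
        self.n = 0                        # observations seen so far
        self.f = [0.0] * len(grid)        # running density estimate on grid

    def update(self, x_new):
        self.n += 1
        k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
        for i, g in enumerate(self.grid):
            contrib = k((g - x_new) / self.h) / self.h
            self.f[i] += (contrib - self.f[i]) / self.n
```

After one update at `x = 0` with `h = 1`, the estimate at the origin equals the Gaussian kernel's peak, `1 / sqrt(2*pi)`; further identical observations leave it unchanged, which is the running-average behavior in miniature.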
"Continuously updated estimation of conditional hazard functions." Lifetime Data Analysis 32(1): 5.