Pub Date: 2026-03-18 | DOI: 10.1007/s10985-025-09675-w
Lola Etiévant, Mitchell H Gail
The original case-cohort design obtains detailed covariate information on a random sample of subjects from the cohort (the subcohort) and on the subjects who developed the event of interest (the cases). Recently, there has been work on case-cohort estimation of pure risk, i.e., the hypothetical probability that the event occurs assuming it is the only risk. But competing events can preclude the occurrence of the event of interest, so the pure risk overestimates the probability of experiencing the event of interest (the absolute risk). Under the cause-specific hazard Cox model, methods for case-cohort inference have been published for relative hazards and cumulative baseline hazards; however, we have not seen treatments of absolute risk. In this work we focus on absolute risk inference under the cause-specific hazard Cox model when using a sample of subjects from the cohort. We propose an influence-based variance estimation formula and consider two sampling designs: (1) a case-cohort design with exhaustive sampling of subjects who developed the event of interest or a competing event; and (2) an event-stratified sample of the cohort that includes only fractions of these subjects. Our proposed variance estimate properly accounts for the sampling features and allows appropriate analysis of the sampled data. We illustrate our method and designs in simulation and on the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial. These analyses also suggest that the "robust" variance originally proposed by Barlow (Biometrics, 50:1064-1072, 1994) may be too large for the absolute risk when using a cohort subsampling design.
Title: Inference for cause-specific Cox model absolute risk in cohort subsampling designs. Lifetime Data Analysis, 32(2).
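The gap between pure and absolute risk described above can be illustrated numerically. A minimal sketch with hypothetical constant cause-specific hazards, not the paper's estimator (which handles covariates and sampling weights):

```python
import math

def pure_risk(lam1, t):
    # Pure risk: probability of event 1 by time t if it were the only risk.
    return 1.0 - math.exp(-lam1 * t)

def absolute_risk(lam1, lam2, t):
    # Absolute risk (cumulative incidence) of event 1 by time t when a
    # competing event with hazard lam2 can occur first:
    # F1(t) = lam1/(lam1+lam2) * (1 - exp(-(lam1+lam2)*t))
    lam = lam1 + lam2
    return (lam1 / lam) * (1.0 - math.exp(-lam * t))

lam1, lam2, t = 0.05, 0.03, 10.0
print(round(pure_risk(lam1, t), 4))            # larger: ignores the competing event
print(round(absolute_risk(lam1, lam2, t), 4))  # smaller: competing event intervenes
```

With any positive competing hazard, the absolute risk is strictly below the pure risk, which is exactly why pure risk overestimates the probability of experiencing the event of interest.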
Pub Date: 2026-03-09 | DOI: 10.1007/s10985-026-09698-x
Ruobing Jia, Yichen Lou, Jianguo Sun, Peijie Wang
Interval-censored competing risks data frequently arise in medical and clinical studies, among other settings, and furthermore the cause of failure may be missing in some situations. In this paper, we consider regression analysis of such data under the framework of an additive subdistribution hazard model and propose a two-step sieve and weighted maximum likelihood estimation procedure. The method explicitly imposes constraints on the cumulative incidence functions to ensure valid survival function estimation and adopts an augmented inverse probability weighting strategy to address the issue of missing event types. In the proposed approach, Bernstein polynomials are employed to approximate the unknown functions, and the proposed estimators are shown to be consistent and asymptotically normal. An extensive simulation study indicates that the proposed method works well in practical situations. Finally, the proposed approach is applied to real data from a breast cancer study.
Title: Semiparametric regression analysis of interval-censored competing risks data under additive hazards model with missing event types. Lifetime Data Analysis, 32(2).
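The Bernstein polynomial approximation used above for the unknown functions is simple to evaluate directly. A minimal sketch (the degree and target functions are illustrative, not from the paper):

```python
from math import comb

def bernstein(f, n, x):
    # Degree-n Bernstein polynomial approximation of f on [0, 1]:
    # B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1-x)^(n-k)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Bernstein polynomials reproduce linear functions exactly ...
print(bernstein(lambda u: u, 10, 0.3))       # approximately 0.3
# ... and approximate smooth functions uniformly as n grows:
# B_n(u^2; x) = x^2 + x(1-x)/n, so the error shrinks like 1/n.
print(bernstein(lambda u: u * u, 50, 0.5))   # approximately 0.255
```

Monotonicity constraints of the kind the paper imposes on cumulative incidence functions are convenient here because a Bernstein polynomial is monotone whenever its coefficients f(k/n) are monotone in k.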
To examine the causal effects of time-varying treatments on survival, structural nested cumulative survival time models (SNCSTMs) are flexible and theoretically promising semiparametric models characterized by causally interpretable parameters. One concern is the prerequisite for uniformly scheduled data collection and complete data on time-varying confounders. For example, in pharmacoepidemiological studies using medical information databases, laboratory test results can be missing due to unscheduled hospital visits or non-compliance with health checkups. Furthermore, the missing-data mechanisms may be non-ignorable and non-monotone, invalidating the typical missing-data methods that assume ignorable or monotone mechanisms. We propose a novel g-estimation method for SNCSTMs with non-ignorable, non-monotone missing data in time-varying confounders. We augment the g-estimation functions using missing-probability and imputation models that incorporate a user-defined selection function, which allows sensitivity analyses evaluating the departure of the missing data from ignorable mechanisms. With a properly chosen selection function, our estimator is doubly robust in the sense that it is consistent if either the model for the missing probability or the model for imputing the missing data is correct at each time point, and if either the model for the propensity score or the model for the conditional expectation of the counterfactual counting processes is correct. Moreover, applying frequentist-type multiple imputation yields a closed-form solution for the estimator even when time-varying confounders are missing. A simulation study evaluated the proposed method's finite-sample performance and the estimator's double robustness. We also conducted sensitivity analyses in a pharmacoepidemiological study using a Japanese medical claims database, assessing the risk of hypoglycemia in sulfonylurea-treated patients with incomplete hemoglobin A1c values.
Title: Doubly robust g-estimation of structural nested cumulative survival time models with non-ignorable, non-monotone missing data in time-varying confounders. Yoshinori Takeuchi, Sho Komukai, Atsushi Goto, Tomohiro Shinozaki. Lifetime Data Analysis, 32(2). Pub Date: 2026-03-04 | DOI: 10.1007/s10985-026-09692-3
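The double robustness claimed above can be pictured in a much simpler setting: estimating a mean when outcomes are missing. A minimal augmented-IPW sketch, not the paper's g-estimator (data, `pi_hat` and `m_hat` are hypothetical model outputs):

```python
def dr_mean(y, r, pi_hat, m_hat):
    """Augmented IPW (doubly robust) estimate of E[Y] with outcomes
    missing at random: consistent if either the observation-probability
    model pi_hat or the imputation model m_hat is correctly specified."""
    n = len(r)
    total = 0.0
    for yi, ri, pi, mi in zip(y, r, pi_hat, m_hat):
        aug = mi                 # imputation-model prediction
        if ri:                   # observed: add the IPW correction term
            aug += (yi - mi) / pi
        total += aug
    return total / n

# Fully observed data with a deliberately wrong imputation model:
# the IPW correction removes the bias, so the estimate is still the mean.
print(dr_mean([1.0, 2.0, 3.0], [1, 1, 1], [1.0, 1.0, 1.0], [9.9, 9.9, 9.9]))  # 2.0
```

The paper's estimator has the same "either model may be wrong" structure, but with time-varying confounders, counterfactual counting processes, and a user-specified selection function in place of this toy augmentation.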
Pub Date: 2026-02-28 | DOI: 10.1007/s10985-026-09695-0
XiaoDong Zhou, YunJuan Wang, RongXian Yue, Weng Kee Wong
Current methodological research on randomized controlled trial design has predominantly focused on studies with a single primary endpoint. However, many trials in practice involve multiple competing target events. The optimal designs for survival trials with competing target events have not been systematically addressed in the statistical literature. This paper fills this significant gap by developing design methodologies for randomized discrete-time-to-event trials with competing endpoints. We derive the Fisher information matrix for the general discrete-time survival model (DTSM) by transforming the original discrete-time survival data into proper multinomial responses. By introducing a cost-based generalized [Formula: see text]-optimal design criterion, we identify various types of optimal designs for estimating the treatment effects. Under the assumption of a parametric competing risks model for the underlying survival process, we demonstrate that the optimal treatment allocation scheme is critically influenced by the parameter values within this model. Our methodology is applied to the redesign of the SANAD trial, which examines withdrawal times from anti-epileptic drugs, thereby highlighting the advantages of our optimal design strategies. A key finding is that assigning subjects equally to the different groups in a two-arm DTSM trial with competing risks is generally a favorable choice, unless the hazard rates over the duration of the trial in both groups are low.
Title: Optimal designs for discrete-time survival models with competing risks. Lifetime Data Analysis, 32(2).
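The transformation of discrete-time survival data into multinomial responses, on which the Fisher information derivation above rests, amounts to a person-period expansion. A minimal sketch (the outcome coding is an illustrative convention, not the paper's notation):

```python
def person_period(time, cause):
    """Expand one subject's (time, cause) into per-period multinomial
    outcomes: 0 = survived the period (or was censored at `time`),
    1, 2, ... = failed from the given competing cause in that period."""
    rows = []
    for t in range(1, time + 1):
        outcome = cause if (t == time and cause != 0) else 0
        rows.append((t, outcome))
    return rows

print(person_period(3, 2))  # [(1, 0), (2, 0), (3, 2)]  fails from cause 2 in period 3
print(person_period(2, 0))  # [(1, 0), (2, 0)]          censored after period 2
```

Each period then contributes one multinomial trial (survive, or fail from one of the causes), which is what makes the information matrix tractable.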
Pub Date: 2026-02-23 | DOI: 10.1007/s10985-026-09694-1
Gizel Bakicierler Sezer, Ufuk Beyaztas
Survival analysis with functional covariates has emerged as an important extension of the classical Cox proportional hazards model, allowing one to assess how entire trajectories or curves influence time-to-event outcomes. However, existing functional Cox models are typically fitted using non-robust techniques and can be highly sensitive to outliers or aberrant observations in the data. In this paper, we propose a robust functional Cox regression model that addresses this limitation. The proposed methodology combines a projection-pursuit-based robust functional principal component analysis with robust Cox regression estimation in a finite-dimensional subspace. By adopting the robust functional principal component analysis approach for dimension reduction, we obtain principal components that resist the influence of outlying functional observations. Then, a robust partial likelihood approach which additionally downweights the effects of outliers is used to estimate the parameters of a Cox regression model constructed using the robust functional principal components and scalar covariates. We establish the asymptotic properties of the proposed estimator, including Fisher consistency, [Formula: see text]-consistency, and asymptotic normality, under a set of mild and practically verifiable regularity conditions. Furthermore, we derive and analyze the influence function to assess the robustness characteristics of the estimator. Through an extensive Monte Carlo simulation study, we provide compelling evidence that the proposed method outperforms classical functional linear Cox regression and penalized functional regression techniques, particularly in the presence of outliers. We further demonstrate the proposed method's effectiveness using accelerometry-based survival data from the National Health and Nutrition Examination Survey. Our method has been implemented in the [Formula: see text] package.
Title: Robust functional Cox regression model. Lifetime Data Analysis, 32(1).
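The downweighting of outliers in a robust partial likelihood can be pictured with a standard Huber-type weight function. A sketch of the general idea only; the paper's exact weighting scheme may differ:

```python
def huber_weight(r, c=1.345):
    # Weight 1 for small standardized residuals; decreasing weight c/|r|
    # beyond the tuning constant c, so outlying observations contribute
    # less to the estimating equations.
    a = abs(r)
    return 1.0 if a <= c else c / a

print(huber_weight(0.5))    # 1.0 (inlier, full weight)
print(huber_weight(13.45))  # about 0.1 (gross outlier, heavily downweighted)
```

The constant 1.345 is the conventional choice giving roughly 95% efficiency at the normal model; in the functional setting the same idea is applied after projecting curves onto robust principal components.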
Pub Date: 2026-02-17 | DOI: 10.1007/s10985-025-09681-y
Erik T Parner
In clinical research, estimating the average treatment effect is a common goal. However, when treatment effects vary substantially across individuals, it is often more informative to evaluate the treatment effect within subgroups. This paper focuses on causal inference for a duration outcome in a principal stratum, defined as the subgroup of individuals who would experience a positive duration under one treatment. Motivated by the Danish Vulva Cancer Recurrence Study (DaVulvaRec), which compares intensive versus standard follow-up in women treated for vulvar cancer, we examine the effect of intensive follow-up on the time with a cancer recurrence diagnosis. In this example, the principal stratum is the women who would be diagnosed with a cancer recurrence under intensive follow-up. We present a framework for identifying and estimating the average treatment effect in the principal stratum under a monotonicity assumption and introduce a sensitivity parameter to evaluate the impact of potential violations of this assumption. Using a multi-state model with pseudo-observations, we account for censoring and demonstrate that this approach offers greater statistical power than conventional comparisons between treatment groups. We illustrate the methodology with a sample size calculation, the final analysis of the DaVulvaRec study using a simulated data set, and an application to data from a randomized study on colon cancer.
Title: Estimating treatment effects on duration with disease: a principal stratification framework. Lifetime Data Analysis, 32(1), article 15.
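Pseudo-observations, used above with the multi-state model, replace a summary statistic with per-subject jackknife contributions. A minimal sketch for the sample mean, where the construction returns the raw data exactly; for censored survival functionals it would not, which is the point of the method:

```python
def pseudo_observations(x):
    # Jackknife pseudo-observation for subject i:
    #   theta_i = n * theta_hat - (n - 1) * theta_hat_{-i}
    # where theta_hat_{-i} is the estimate with subject i left out.
    n = len(x)
    total = sum(x)
    theta_hat = total / n
    return [n * theta_hat - (n - 1) * (total - xi) / (n - 1) for xi in x]

# For the mean, the pseudo-observations are the observations themselves,
# a useful sanity check on the construction.
print(pseudo_observations([1.0, 4.0, 7.0]))  # [1.0, 4.0, 7.0]
```

Applied to, say, a Kaplan-Meier or multi-state transition probability, the same formula yields one complete-data-like value per subject, which can then enter a standard regression.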
Pub Date: 2026-02-09 | DOI: 10.1007/s10985-026-09693-2
Bang Wang, Zi Wang, Yu Cheng
Time-to-first-event analysis is often used for studies involving multiple event times, where each component is treated equally regardless of its clinical importance. Alternative summaries such as Win Ratio, Net Benefit, and Win Odds (WO) have drawn attention lately because they can handle different types of outcomes and allow for a hierarchical ordering of component outcomes. In this paper, we focus on WO and propose proportional WO regression models to evaluate the treatment effect on multiple outcomes while controlling for other risk factors. The models are as easily interpretable as a standard logistic regression model. However, the proposed WO regression is more advanced: multiple outcomes of different types can be modeled together, and the estimating equation is constructed from all possible, and potentially dependent, pairings of a treated individual with a control one under the functional response modeling framework. In addition, informative ties are carefully distinguished from inconclusive comparisons due to censoring, and the latter are handled via the inverse probability of censoring weighting method. We establish the asymptotic properties of the estimated regression coefficients using U-statistic theory and demonstrate the finite sample performance through numerical studies.
Title: Generalized win-odds regression models for composite endpoints. Lifetime Data Analysis, 32(1), article 13.
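The win odds itself is easy to compute from all treated-control pairings. A minimal sketch for a single uncensored continuous outcome; the paper's regression additionally handles hierarchies of outcomes and uses inverse probability of censoring weighting:

```python
def win_odds(treated, control):
    # Compare every treated subject with every control subject:
    # WO = (wins + 0.5 * ties) / (losses + 0.5 * ties).
    wins = ties = losses = 0
    for x in treated:
        for y in control:
            if x > y:
                wins += 1
            elif x < y:
                losses += 1
            else:
                ties += 1
    return (wins + 0.5 * ties) / (losses + 0.5 * ties)

print(win_odds([2.0, 3.0], [1.0, 2.0]))  # 7.0: 3 wins, 1 tie, 0 losses
```

With hierarchical composite endpoints, each pair is compared on the most important component first, moving down the hierarchy only when that comparison is inconclusive.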
Pub Date: 2026-02-09 | DOI: 10.1007/s10985-026-09689-y
Qiyue Huang, Anyin Feng, Qiang Wu, Xingwei Tong
This study develops estimation methods for a deep partially linear Cox proportional hazards model with a change point under current status data, aiming to accommodate complex change-point effects. Prior work has largely relied on linear models, which may inadequately capture relationships among multivariate covariates and thus hinder accurate change-point detection. To address this, we use a deep neural network to model covariate effects within the Cox framework and propose a maximum likelihood estimation procedure for the model. We establish asymptotic properties of the resulting estimators, including consistency, asymptotic independence, and semiparametric efficiency. Simulation studies indicate that the proposed inference procedure performs well in finite samples. An analysis of a breast cancer dataset is provided to illustrate the methodology.
Title: Deep learning for the change-point Cox model with current status data. Lifetime Data Analysis, 32(1), article 14.
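Current status data record only an inspection time and an event indicator, which shapes the likelihood the paper maximizes. A minimal sketch of the log-likelihood for a candidate distribution function F (the exponential F below is a hypothetical stand-in; the paper's F embeds a deep network and a change point):

```python
import math

def current_status_loglik(F, data):
    # data: (c, delta) pairs; delta = 1 if the event had occurred by the
    # inspection time c. Each subject contributes a Bernoulli term:
    # delta * log F(c) + (1 - delta) * log(1 - F(c)).
    return sum(d * math.log(F(c)) + (1 - d) * math.log(1.0 - F(c))
               for c, d in data)

F = lambda t: 1.0 - math.exp(-0.2 * t)   # candidate exponential CDF
data = [(1.0, 0), (5.0, 1), (10.0, 1)]
print(current_status_loglik(F, data))
```

Estimation then amounts to maximizing this objective over the model class, with F parameterized by the network and change-point parameters rather than a single rate.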
Pub Date: 2026-01-31 | DOI: 10.1007/s10985-026-09691-4
Wen Su, Changyu Liu, Guosheng Yin, Jian Huang
Current status data are commonly encountered in modern medicine, econometrics, and social science. Their unique characteristics pose significant challenges to analysis, and existing methods can suffer grave consequences when the underlying model is misspecified. To address these difficulties, we propose a model-free two-stage generative approach for estimating the conditional cumulative distribution function given predictors. We first learn a conditional generator nonparametrically for the joint conditional distribution of observation times and event status, and then construct nonparametric maximum likelihood estimators of the conditional distribution functions based on samples from the conditional generator. Subsequently, we study the convergence properties of the proposed estimator and establish its consistency.
{"title":"Wasserstein GAN-based estimation for conditional distribution function with current status data.","authors":"Wen Su, Changyu Liu, Guosheng Yin, Jian Huang","doi":"10.1007/s10985-026-09691-4","DOIUrl":"10.1007/s10985-026-09691-4","url":null,"abstract":"<p><p>Current status data are commonly encountered in modern medicine, econometrics and social science. Its unique characteristics pose significant challenges to the analysis of such data and the existing methods often suffer grave consequences when the underlying model is misspecified. To address these difficulties, we propose a model-free two-stage generative approach for estimating the conditional cumulative distribution function given predictors. We first learn a conditional generator nonparametrically for the joint conditional distribution of observation times and event status, and then construct the nonparametric maximum likelihood estimators of conditional distribution functions based on samples from the conditional generator. Subsequently, we study the convergence properties of the proposed estimator and establish its consistency. Simulation studies under various settings show the superior performance of the deep conditional generative approach over the classical modeling approaches and an application to Parvovirus B19 seroprevalence data yields reasonable predictions.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"32 1","pages":"12"},"PeriodicalIF":1.0,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858558/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
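The second stage described above, the nonparametric maximum likelihood estimator of a distribution function from current status observations, is classically computed by isotonic regression of the event indicators on the observation times. Below is a minimal pure-Python sketch of that step using the pool-adjacent-violators algorithm (PAVA); it takes observed data directly rather than samples drawn from a learned conditional generator, and the function name is illustrative, not from the paper.

```python
def npmle_current_status(times, deltas):
    """NPMLE of the distribution function F at the observed monitoring
    times, from current status data (times, event indicators deltas).
    The NPMLE is the isotonic regression of the indicators on the
    times, computed by pool-adjacent-violators (PAVA)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    t = [times[i] for i in order]
    d = [float(deltas[i]) for i in order]

    # Each block holds [sum of indicators, number of observations];
    # adjacent blocks violating monotonicity are pooled.
    blocks = []
    for y in d:
        blocks.append([y, 1.0])
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]
        ):
            s, w = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += w

    # Expand the pooled block means back to one fitted value per time.
    fitted = []
    for s, w in blocks:
        fitted.extend([s / w] * int(w))
    return t, fitted
```

For example, indicators [0, 1, 0, 1] at sorted times violate monotonicity in the middle, so PAVA pools the two middle observations to 0.5, yielding a nondecreasing step estimate of F.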
Pub Date : 2026-01-31  DOI: 10.1007/s10985-026-09688-z
Na Bo, Ying Ding
Estimating heterogeneous treatment effects (HTE) for survival outcomes has gained increasing attention in precision medicine, as it captures variations in treatment efficacy among patients or subgroups. However, most existing methods identify subgroups post hoc rather than simultaneously estimating the HTE and identifying causal subgroups. In this paper, we propose an interpretable HTE estimation framework that integrates meta-learners with tree-based methods to simultaneously estimate the conditional average treatment effect (CATE) for survival outcomes and identify predictive subgroups. We evaluated the performance of our method through extensive simulation studies. We also demonstrated its application in a large randomized controlled trial (RCT) for age-related macular degeneration (AMD), a progressive polygenic eye disease, to estimate the HTE of an antioxidant and mineral supplement on time to AMD progression and to identify genetically defined subgroups with enhanced treatment effects. Our method offers a direct interpretation of the estimated HTE and provides evidence to support precision healthcare.
{"title":"Estimation of the interpretable heterogeneous treatment effect with causal subgroup discovery in survival outcomes.","authors":"Na Bo, Ying Ding","doi":"10.1007/s10985-026-09688-z","DOIUrl":"10.1007/s10985-026-09688-z","url":null,"abstract":"<p><p>Estimating heterogeneous treatment effects (HTE) for survival outcomes has gained increasing attention in precision medicine, as it captures variations in treatment efficacy among patients or subgroups. However, most existing methods conduct post-hoc subgroup identifications rather than simultaneously estimating HTE and identifying causal subgroups. In this paper, we propose an interpretable HTE estimation framework that integrates meta-learners with tree-based methods to estimate the conditional average treatment effect (CATE) for survival outcomes and identify predictive subgroups simultaneously. We evaluated the performance of our method through extensive simulation studies. We also demonstrated its application in a large randomized controlled trial (RCT) for age-related macular degeneration (AMD), a progressive polygenic eye disease, to estimate the HTE of an antioxidant and mineral supplement on time-to-AMD progression and to identify genetically defined subgroups with enhanced treatment effects. Our method offers a direct interpretation of the estimated HTE and provides evidence to support precision healthcare.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"32 1","pages":"11"},"PeriodicalIF":1.0,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
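The meta-learner idea underlying the framework above can be illustrated in miniature with a T-learner: fit an outcome model separately in each treatment arm and take the difference of the two fits as the CATE estimate. The sketch below is a deliberate simplification, using stratum-wise means of a generic outcome as the arm-specific learners over a discrete covariate (the paper's framework instead uses survival-specific learners and tree-based subgroup discovery); the function and variable names are hypothetical.

```python
from collections import defaultdict

def t_learner_cate(x, treated, y):
    """Toy T-learner: within each stratum of a discrete covariate x,
    estimate the mean outcome separately among treated and control
    subjects, and return their difference as the subgroup CATE."""
    # x -> [treated outcome sum, treated count, control sum, control count]
    stats = defaultdict(lambda: [0.0, 0, 0.0, 0])
    for xi, ti, yi in zip(x, treated, y):
        s = stats[xi]
        if ti:
            s[0] += yi
            s[1] += 1
        else:
            s[2] += yi
            s[3] += 1
    # Difference of arm-specific means per covariate stratum.
    return {xi: s[0] / s[1] - s[2] / s[3] for xi, s in stats.items()}
```

A stratum where this difference is large corresponds to a candidate subgroup with an enhanced treatment effect; in the paper's setting the strata would be genetically defined and discovered by the tree, not prespecified.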