Pub Date : 2025-07-01Epub Date: 2025-05-23DOI: 10.1007/s10985-025-09656-z
Yufeng Xia, Yangkuo Li, Xiaobing Zhao, Xuan Xu
To investigate pairwise interactions arising from recurrent event processes in a longitudinal network, we follow the framework of the stochastic block model, in which every node belongs to a latent group and interactions between node pairs from two given groups follow a conditional nonhomogeneous Poisson process. Our focus is on discrete observation times, which are common in practice for cost-saving reasons. The variational EM algorithm and variational maximum likelihood estimation are applied for statistical inference. A method based on a defined distribution function F, combined with a self-consistency algorithm for recurrent events, is used to estimate the intensity functions of the edges. Numerical simulations illustrate the performance of the proposed estimation procedure in uncovering the underlying structure of longitudinal networks with recurrent event processes. A dataset of interactions among French schoolchildren, collected for influenza monitoring, is analyzed.
Title: "Investigating network structures in recurrent event data with discrete observation times." (Lifetime Data Analysis, pp. 543-573)
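The data-generating mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the group count, membership probabilities, and intensity matrix are assumed values, and a constant rate stands in for the nonhomogeneous intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n nodes, K latent groups, constant group-pair
# rates; a nonhomogeneous model would replace lam[a, b] with a
# time-varying intensity integrated over [0, T].
n, K, T = 30, 2, 5.0
pi = np.array([0.6, 0.4])            # latent group probabilities
lam = np.array([[1.0, 0.1],
                [0.1, 0.8]])         # lam[a, b]: rate between groups a and b
z = rng.choice(K, size=n, p=pi)      # latent group of each node

# Given the groups, the number of interactions on each node pair over
# [0, T] is Poisson with mean lam[z_i, z_j] * T.
counts = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        counts[i, j] = rng.poisson(lam[z[i], z[j]] * T)
```

Inference would then treat `z` as unobserved and recover it, together with the intensities, from `counts` observed at discrete times.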
Pub Date : 2025-07-01Epub Date: 2025-06-25DOI: 10.1007/s10985-025-09660-3
Seoyoon Cho, Matthew A Psioda, Joseph G Ibrahim
We propose a joint model for multiple time-to-event outcomes in which the outcomes have a cure structure. When a subset of a population is not susceptible to an event of interest, traditional survival models cannot accommodate this phenomenon. For example, for patients with melanoma, certain modern treatment options can reduce mortality and relapse rates. Traditional survival models assume the entire population is at risk for the event of interest, i.e., has a non-zero hazard at all times. Cure rate models, by contrast, allow a portion of the population to be free of risk for the event of interest. Our proposed model uses a novel truncated Gaussian copula to jointly model bivariate time-to-event outcomes of this type. In oncology studies, multiple time-to-event outcomes (e.g., overall survival and relapse-free or progression-free survival) are typically of interest, so multivariate methods for analyzing time-to-event outcomes with a cure structure are potentially of great utility. We formulate a joint model directly on the time-to-event outcomes (i.e., unconditional on whether an individual is cured). Dependence between the time-to-event outcomes is modeled via the correlation matrix of the truncated Gaussian copula. A Markov chain Monte Carlo procedure is proposed for model fitting. Simulation studies and a real data analysis using data from a melanoma clinical trial are presented to illustrate the performance of the method, and the proposed model is compared to independent models.
Title: "Bayesian bivariate cure rate models using Gaussian copulas." (Lifetime Data Analysis, pp. 658-673)
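A minimal simulation of the kind of data targeted here, using a plain (untruncated) Gaussian copula with exponential margins and a point-mass "cured" fraction; all parameter values are our own illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, rho, p_cure, rate = 4000, 0.5, 0.3, 0.2   # assumed values

# Correlated uniforms from a bivariate Gaussian copula.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
u = norm.cdf(rng.standard_normal((n, 2)) @ L.T)

# Improper marginal survival: with probability p_cure the event never
# occurs (time coded as inf); otherwise an exponential event time is
# drawn by inverting the susceptible part of the mixture CDF.
t = np.full((n, 2), np.inf)
susceptible = u < 1 - p_cure
t[susceptible] = -np.log1p(-u[susceptible] / (1 - p_cure)) / rate
```

Each margin then has a cure fraction of about `p_cure`, and the copula correlation induces dependence between the two outcomes, including between their cure statuses.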
Pub Date : 2025-07-01Epub Date: 2025-07-28DOI: 10.1007/s10985-025-09659-w
Marina T Dietrich, Dennis Dobler, Mathisca C M de Gunst
The wild bootstrap is a popular resampling method in the context of time-to-event data analysis. Previous works established its large sample properties for applications to various estimators and test statistics. It can be used to justify the accuracy of inference procedures such as hypothesis tests or time-simultaneous confidence bands. This paper provides a general framework for establishing large sample properties in a unified way by using martingale structures. The framework covers most of the well-known parametric, semiparametric and nonparametric statistical methods in time-to-event analysis. Along the way of proving the validity of the wild bootstrap, a new variant of Rebolledo's martingale central limit theorem for counting process-based martingales is developed as well.
Title: "Wild bootstrap for counting process-based statistics: a martingale theory-based approach." (Lifetime Data Analysis, pp. 631-657; open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12317882/pdf/)
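As a concrete instance of the idea, here is a sketch of the wild bootstrap for the Nelson-Aalen estimator, one of the counting-process statistics the framework covers. The toy data and Gaussian multipliers are illustrative choices; the paper's theory allows more general multiplier distributions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy right-censored sample: observation times and event indicators.
times = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 4.5, 5.0])
delta = np.array([1,   0,   1,   1,   0,   1,   1,   0])
order = np.argsort(times)
times, delta = times[order], delta[order]
n = len(times)
at_risk = n - np.arange(n)                  # risk set size Y(t_i)

# Nelson-Aalen estimator: cumulative sum of dN(t_i) / Y(t_i).
na = np.cumsum(delta / at_risk)

# Wild bootstrap: multiply each increment by an independent mean-zero,
# variance-one multiplier G_i; the resulting paths approximate the
# fluctuation of the estimator around the truth.
B = 500
G = rng.standard_normal((B, n))
boot_paths = np.cumsum(G * delta / at_risk, axis=1)
se = boot_paths.std(axis=0)                 # pointwise bootstrap SE
```

Quantiles of `np.abs(boot_paths).max(axis=1)` would give the critical value for a time-simultaneous confidence band of the kind discussed above.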
Pub Date : 2025-07-01Epub Date: 2025-07-16DOI: 10.1007/s10985-025-09661-2
Jih-Chang Yu, Yu-Jen Cheng
In this study, we investigate estimation and variable selection for semiparametric transformation models with length-biased survival data, a special case of left truncation commonly encountered in the social sciences and cancer prevention trials. To correct for the sampling bias, conventional methods such as conditional likelihood, martingale estimating equations, and composite likelihood have been proposed. However, these methods may be less efficient because they rely on only partial information from the full likelihood. In contrast, we adopt a full-likelihood approach under the semiparametric transformation model and propose a unified and more efficient nonparametric maximum likelihood estimator (NPMLE). To perform variable selection, we incorporate an adaptive least absolute shrinkage and selection operator (ALASSO) penalty into the full likelihood. We show that when the NPMLE is used as the initial value, the resulting one-step ALASSO estimator, which amounts to a simplified Newton-Raphson update, achieves the oracle properties. Theoretical properties of the proposed methods are established using empirical process techniques.
Title: "Estimation and variable selection for semiparametric transformation models with length-biased survival data." (Lifetime Data Analysis, pp. 674-701)
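The sampling bias at issue can be seen in a short simulation: under length-biased sampling a lifetime of length t is observed with probability proportional to t, so for an exponential with mean 2 the observed mean shifts from E[T] = 2 to E[T^2]/E[T] = 4. The distribution choice is ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Underlying lifetimes: exponential with mean 2 (assumed for the demo).
m = 200_000
t = rng.exponential(2.0, size=m)

# Length-biased sampling: accept each lifetime with probability
# proportional to its length (rejection sampling against the maximum).
keep = rng.uniform(0.0, t.max(), size=m) < t
biased = t[keep]

# The biased sample has density t * f(t) / E[T] -- here Gamma(2, 2) --
# with mean E[T^2] / E[T] = 4 instead of 2.
```

Correcting for exactly this over-representation of long lifetimes is what the conditional and full-likelihood approaches above are designed to do.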
Pub Date : 2025-07-01Epub Date: 2025-06-03DOI: 10.1007/s10985-025-09657-y
Sin-Ho Jung
In a typical individually randomized group-treatment (IRGT) trial, subjects are randomized between a control arm and an experimental arm. While subjects randomized to the control arm are treated individually, those in the experimental arm are assigned to one of several clusters for group treatment. Because they share common frailties, the outcomes of subjects in the same cluster tend to be dependent, whereas those in the control arm are independent. In this paper, we consider IRGT trials with time-to-event outcomes. We modify the two-sample log-rank test to compare the survival data from IRGT trials and derive its sample size formula. The proposed sample size formula requires specification of marginal survival distributions for the two arms, a bivariate survival distribution and cluster size distribution for the experimental arm, and the accrual period or accrual rate together with the additional follow-up period. In a sample size calculation, either the cluster sizes are given and the number of clusters is calculated, or the number of clusters is fixed when the study opens and the accrual period required to determine the cluster sizes is calculated. Simulations and a real data example show that the proposed test statistic controls the type I error rate and that the formula provides accurately powered sample sizes.
Title: "Design and analysis of individually randomized group-treatment trials with time to event outcomes." (Lifetime Data Analysis, pp. 574-594)
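For orientation, the standard Schoenfeld events formula for an ordinary (independent-subjects) two-sample log-rank test is sketched below; the paper's IRGT version inflates such a calculation to account for within-cluster dependence in the experimental arm. The hazard-ratio value in the usage line is an arbitrary example.

```python
from math import log

from scipy.stats import norm


def schoenfeld_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Required number of events for a two-sided two-sample log-rank
    test under proportional hazards (independent subjects only; a
    clustered design needs a variance inflation on top of this)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2)


d = schoenfeld_events(hr=0.67)   # roughly 196 events for HR = 0.67
```

The marginal survival, accrual, and follow-up inputs listed above then translate the required number of events into a required number of subjects.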
Pub Date : 2025-04-01Epub Date: 2025-03-16DOI: 10.1007/s10985-025-09647-0
Myrthe D'Haen, Ingrid Van Keilegom, Anneleen Verhasselt
The study of survival data often requires taking proper care of the censoring mechanism that prevents complete observation of the data. Under right censoring, only the first occurring event is observed: either the event of interest, or a competing event such as withdrawal of a subject from the study. The corresponding identifiability difficulties have led many authors to impose (conditional) independence or a fully known dependence between survival and censoring times, neither of which is always realistic. However, recent results in the survival literature showed that parametric copula models allow identification of all model parameters, including the association parameter, under appropriately chosen marginal distributions. The present paper is the first to apply such models in a quantile regression context, thereby benefiting from its well-known advantages, e.g., robustness and richer inference results. The parametric copula is supplemented with a likewise parametric, yet flexible, enriched asymmetric Laplace distribution for the survival times conditional on the covariates. The asymmetric Laplace basis provides a close connection to quantiles, while the extension with Laguerre orthogonal polynomials ensures sufficient flexibility as the polynomial degree increases. The distributional flavour of the presented quantile regression comes with advantages of both a theoretical and a computational nature. All model parameters are proven to be identifiable, consistent, and asymptotically normal.
Title: "Quantile regression under dependent censoring with unknown association." (Lifetime Data Analysis, pp. 253-299)
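The connection between the asymmetric Laplace distribution and quantiles that the paper builds on can be stated in one line: maximizing an asymmetric Laplace likelihood is equivalent to minimizing the Koenker-Bassett check loss. A small numerical sketch on our own toy data:

```python
import numpy as np


def check_loss(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0}); its sum is,
    up to constants, the negative log-likelihood of an asymmetric
    Laplace distribution with skewness parameter tau."""
    return u * (tau - (u < 0))


rng = np.random.default_rng(4)
y = rng.normal(size=2001)
tau = 0.25

# Minimizing the summed check loss over candidate locations recovers
# the empirical tau-quantile.
grid = np.sort(y)
loss = np.array([check_loss(y - q, tau).sum() for q in grid])
best = grid[loss.argmin()]
```

The enriched version used above replaces the plain asymmetric Laplace density with a Laguerre-polynomial expansion while keeping this quantile link.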
Pub Date : 2025-04-01Epub Date: 2025-04-05DOI: 10.1007/s10985-025-09648-z
Clara Bertinelli Salucci, Azzeddine Bakdi, Ingrid Kristine Glad, Bo Henry Lindqvist, Erik Vanem, Riccardo De Bin
In the context of time-to-event analysis, first hitting time methods regard the event occurrence as the endpoint of some evolving process. The characteristics of the process are of great relevance for the analysis, which makes this class of models interesting and particularly suitable for applications where something about the degradation path is known. In cases where the degradation can only worsen, a monotonic process is the most suitable choice. This paper proposes a boosting algorithm for first hitting time models based on an underlying homogeneous gamma process, to account for the monotonicity of the degradation trend. The predictive power and versatility of the algorithm are shown with real data examples from both engineering and biomedical applications, as well as with simulated examples.
Title: "Lifetime analysis with monotonic degradation: a boosted first hitting time model based on a homogeneous gamma process." (Lifetime Data Analysis, pp. 300-339; open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043765/pdf/)
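The degradation mechanism underlying this model can be simulated directly: a homogeneous gamma process has independent Gamma(a*dt, b) increments, hence monotone paths, and the failure time is the first crossing of a threshold. All parameter values below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Homogeneous gamma process with shape rate a and scale b: the
# increment over a step of length dt is Gamma(a * dt, b), so every
# sample path is monotone nondecreasing.
a, b, thresh, dt = 1.5, 1.0, 10.0, 0.1
n_paths, n_steps = 2000, 400            # horizon of 40 time units

incr = rng.gamma(a * dt, b, size=(n_paths, n_steps))
paths = np.cumsum(incr, axis=1)

# First hitting time of the threshold = failure time; with this long
# horizon every path crosses, so argmax on the boolean array is safe.
first_idx = (paths >= thresh).argmax(axis=1)
hitting_time = (first_idx + 1) * dt
# Degradation drifts at rate a * b per unit time, so hitting times
# center near thresh / (a * b), about 6.7 here.
```

The boosting algorithm described above would let `a` and `b` depend on covariates rather than being fixed constants.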
Pub Date : 2025-04-01Epub Date: 2025-04-26DOI: 10.1007/s10985-025-09653-2
Cecilia Castro, Marta Azevedo, Víctor Leiva, Luís Meira-Machado
We propose a new goodness-of-fit procedure designed to verify the reciprocal property that characterizes the fatigue-life or Birnbaum-Saunders (BS) distribution. Under this property, scaling a random variable that takes positive values by its median results in the same distribution as its reciprocal, a feature frequently encountered in reliability and survival studies. Our procedure employs total time on test (TTT) curves to compare the behavior of the observed data and its reciprocal counterpart, capturing both local and global discrepancies through supremum- and area-based statistics. We establish the theoretical validity of these statistics under mild assumptions, showing that they deliver accurate inference for moderate to large samples. Simulation evidence indicates that our TTT-based procedures are sensitive to subtle departures from log-symmetry, particularly when the distribution underlying the data has heavier or lighter tails than the assumed one. Illustrative real data examples further reveal how overlooking deviations from the reciprocal property can distort reliability estimates and predictions of failure times, showing the practical importance of the new goodness-of-fit procedure. Overall, our findings strengthen the BS framework and provide robust tools for model validation and selection when log-symmetric modeling assumptions are in place.
Title: "Total time on test-based goodness-of-fit statistics for the reciprocal property in fatigue-life models." (Lifetime Data Analysis, vol. 31, no. 2, pp. 422-441)
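A minimal computation of the scaled TTT transform, together with the reciprocal-property comparison it supports. The sup-distance below is one natural statistic of the kind described; the log-normal sample (which is log-symmetric, so the property holds) is our illustrative choice.

```python
import numpy as np


def ttt_curve(x):
    """Scaled total time on test transform at i/n:
    (sum of the first i order statistics + (n - i) * x_(i)) / sum(x)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    ttt = (cum + (n - np.arange(1, n + 1)) * x) / cum[-1]
    return np.arange(1, n + 1) / n, ttt


# Reciprocal-property comparison (sketch): for a log-symmetric sample,
# x / median(x) and median(x) / x are equal in distribution, so their
# TTT curves should be close.
rng = np.random.default_rng(6)
x = rng.lognormal(size=500)
med = np.median(x)
_, t1 = ttt_curve(x / med)
_, t2 = ttt_curve(med / x)
sup_stat = np.abs(t1 - t2).max()
```

Large values of `sup_stat` (or of an area-based analogue) would flag a departure from the reciprocal property.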
Pub Date : 2025-04-01Epub Date: 2025-03-31DOI: 10.1007/s10985-025-09649-y
Cunjin Zhao, Peijie Wang, Jianguo Sun
Double truncation commonly occurs in astronomy, epidemiology and economics. Compared to one-sided truncation, double truncation, which combines left and right truncation, is more challenging to handle, and methods for analyzing doubly truncated data are limited. In this situation, a common approach is to perform an analysis conditional on the truncation times, which is simple but may not be efficient. To address this, we propose a pairwise pseudo-likelihood approach that aims to recover some of the information missed by the conditional methods and can yield more efficient estimation. The resulting estimator is shown to be consistent and asymptotically normal. An extensive simulation study indicates that the proposed procedure works well in practice and is indeed more efficient than the conditional approach. The proposed methodology is applied to an AIDS study.
Title: "A pairwise pseudo-likelihood approach for regression analysis of doubly truncated data." (Lifetime Data Analysis, pp. 340-363)
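The selection effect that makes doubly truncated data hard can be reproduced in a toy simulation: a lifetime is observed only if it falls inside its truncation window, so both very short and very long lifetimes are under-sampled and naive estimates are biased. The distributions and window width are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Doubly truncated sampling (toy setup): a lifetime T enters the
# sample only if it falls inside its truncation window [U, V],
# i.e. U <= T <= V, combining left and right truncation.
m = 50_000
T = rng.exponential(1.0, size=m)       # underlying lifetimes, E[T] = 1
U = rng.uniform(0.0, 1.0, size=m)      # left truncation times
V = U + 0.5                            # right truncation: narrow window
observed = (U <= T) & (T <= V)

# Very short and very long lifetimes rarely fit the window, so the
# naive mean of the observed lifetimes is biased for E[T].
naive_mean = T[observed].mean()
```

The conditional and pairwise pseudo-likelihood approaches above both correct this bias; the pairwise version also exploits information across pairs of observed subjects.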
Pub Date : 2025-04-01Epub Date: 2025-03-04DOI: 10.1007/s10985-025-09646-1
Marija Cuparić, Bojana Milošević
Here we revisit a goodness-of-fit testing problem for randomly right-censored data in the presence of cured subjects, i.e., the population consists of two parts: the cured or non-susceptible group, who will never experience the event of interest, and those who will undergo the event of interest when followed up sufficiently long. We consider modifications of previously proposed characterization-based goodness-of-fit tests for the exponential distribution, constructed via the inverse probability of censoring weighted U- or V-statistic approach. We present their asymptotic properties and extend the discussion to suitable generalizations applicable to a variety of tests formulated with the same methodology. A comparative power study of the proposed tests against a recent CvM-based competitor, and against modifications of the most prominent competitors from prior studies that did not account for cured subjects, demonstrates good finite-sample performance. The novel tests are illustrated on a real dataset related to leukemia relapse.
Title: "Goodness-of-fit testing in the presence of cured data: IPCW approach." (Lifetime Data Analysis, pp. 233-252)
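The IPCW device used here can be sketched in a few lines: estimate the censoring survival function G by Kaplan-Meier with the event indicator reversed, then weight each uncensored observation by 1/G(t-). The four-point dataset is a hand-checkable toy.

```python
import numpy as np


def ipcw_weights(times, delta):
    """Inverse probability of censoring weights: events get weight
    1 / G_hat(t-), censored observations get weight 0, where G_hat is
    the Kaplan-Meier estimator of the censoring distribution (an
    observation counts as a 'censoring event' when delta == 0).
    Assumes the input times are sorted and have no ties."""
    times = np.asarray(times, dtype=float)
    delta = np.asarray(delta)
    n = len(times)
    at_risk = n - np.arange(n)
    G = np.cumprod(1.0 - (1 - delta) / at_risk)   # KM for censoring
    G_left = np.concatenate(([1.0], G[:-1]))      # left-continuous G_hat(t-)
    return np.where(delta == 1, 1.0 / G_left, 0.0)


w = ipcw_weights([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 1])
# Hand check: the censoring KM drops to 2/3 after t = 2, so the two
# later events get weight 1.5 and the weights sum to the sample size.
```

Plugging such weights into a U- or V-statistic is what turns a complete-data goodness-of-fit test into its censoring-adjusted version.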