Incorporating delayed entry into the joint frailty model for recurrent events and a terminal event.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-022-09587-z
Marie Böhnstedt, Jutta Gampe, Monique A A Caljouw, Hein Putter
In studies of recurrent events, joint modeling approaches are often needed to allow for potential dependent censoring by a terminal event such as death. Joint frailty models for recurrent events and death with an additional dependence parameter have been studied for cases in which individuals are observed from the start of the event processes. However, samples are often selected at a later time, which results in delayed entry, so that only individuals who have not yet experienced the terminal event are included. In joint frailty models, such left truncation affects the frailty distribution in ways that need to be accounted for in both the recurrence process and the terminal event process if the two are associated. In a comprehensive simulation study, we demonstrate the effects that failing to adjust for late entry can have, and we derive the correctly adjusted marginal likelihood, which can be expressed as a ratio of two integrals over the frailty distribution. We extend the estimation method of Liu and Huang (Stat Med 27:2665-2683, 2008, https://doi.org/10.1002/sim.3077) to include potential left truncation. Numerical integration is performed by Gaussian quadrature, the baseline intensities are specified as piecewise constant functions, and potential covariates are assumed to have multiplicative effects on the intensities. We apply the method to estimate age-specific intensities of recurrent urinary tract infections and mortality in an older population.
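The "ratio of two integrals" structure of the adjusted marginal likelihood can be sketched numerically. The sketch below assumes a log-normal frailty evaluated by Gauss-Hermite quadrature; the callables `cond_lik` and `surv_at_entry` are hypothetical placeholders for the model-specific pieces, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Gauss-Hermite rule: integral of g(x) * exp(-x^2) dx ~= sum_k w_k * g(x_k)
nodes, weights = hermgauss(30)

def marginal_loglik_contribution(cond_lik, surv_at_entry, sigma):
    """Left-truncation-adjusted marginal log-likelihood contribution of one
    subject, written as a ratio of two integrals over a log-normal frailty
    u = exp(b), b ~ N(0, sigma^2).

    cond_lik(u): conditional likelihood of the subject's observed recurrences
        and terminal-event data given frailty u (hypothetical callable).
    surv_at_entry(u): conditional probability of being terminal-event-free at
        the subject's delayed-entry time given u (hypothetical callable).
    """
    u = np.exp(np.sqrt(2.0) * sigma * nodes)  # quadrature points on frailty scale
    num = weights @ cond_lik(u) / np.sqrt(np.pi)       # numerator integral
    den = weights @ surv_at_entry(u) / np.sqrt(np.pi)  # denominator integral
    return np.log(num) - np.log(den)

# Toy usage with made-up conditional pieces (purely illustrative):
print(marginal_loglik_contribution(cond_lik=lambda u: u * np.exp(-2.0 * u),
                                   surv_at_entry=lambda u: np.exp(-0.5 * u),
                                   sigma=0.7))
```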
{"title":"Incorporating delayed entry into the joint frailty model for recurrent events and a terminal event.","authors":"Marie Böhnstedt, Jutta Gampe, Monique A A Caljouw, Hein Putter","doi":"10.1007/s10985-022-09587-z","DOIUrl":"https://doi.org/10.1007/s10985-022-09587-z","url":null,"abstract":"<p><p>In studies of recurrent events, joint modeling approaches are often needed to allow for potential dependent censoring by a terminal event such as death. Joint frailty models for recurrent events and death with an additional dependence parameter have been studied for cases in which individuals are observed from the start of the event processes. However, samples are often selected at a later time, which results in delayed entry so that only individuals who have not yet experienced the terminal event will be included. In joint frailty models such left truncation has effects on the frailty distribution that need to be accounted for in both the recurrence process and the terminal event process, if the two are associated. We demonstrate, in a comprehensive simulation study, the effects that not adjusting for late entry can have and derive the correctly adjusted marginal likelihood, which can be expressed as a ratio of two integrals over the frailty distribution. We extend the estimation method of Liu and Huang (Stat Med 27:2665-2683, 2008. https://doi.org/10.1002/sim.3077 ) to include potential left truncation. Numerical integration is performed by Gaussian quadrature, the baseline intensities are specified as piecewise constant functions, potential covariates are assumed to have multiplicative effects on the intensities. We apply the method to estimate age-specific intensities of recurrent urinary tract infections and mortality in an older population.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"585-607"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On a simple estimation of the proportional odds model under right truncation.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-022-09584-2
Peng Liu, Kwun Chuen Gary Chan, Ying Qing Chen
Retrospective sampling can be useful in epidemiological research because of its convenience for exploring an etiological association. One particular retrospective sampling scheme collects disease outcomes of the time-to-event type subject to right truncation, along with other covariates of interest. For regression analysis of right-truncated time-to-event data, the so-called proportional reverse-time hazards model has been proposed, but the interpretation of its regression parameters tends to be cumbersome, which has greatly hampered its application in practice. In this paper, we instead consider the proportional odds model, an appealing alternative to the popular proportional hazards model. Under the proportional odds model, there is an embedded relationship between the reverse-time hazard function and the usual hazard function. Building on this relationship, we provide a simple procedure to estimate the regression parameters in the proportional odds model for right-truncated data. Weighted estimation is also studied.
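For reference, the two hazard notions and the proportional odds model can be written as follows (standard notation assumed here, not taken from the paper):

```latex
% Usual hazard and reverse-time hazard of a lifetime T with cdf F = 1 - S:
\[ \lambda(t) = \frac{f(t)}{S(t)}, \qquad r(t) = \frac{f(t)}{F(t)}, \qquad
   \text{so that } r(t) = \lambda(t)\,\frac{S(t)}{F(t)}. \]
% Proportional odds model with covariate vector Z and baseline survival S_0:
\[ \frac{1 - S(t \mid Z)}{S(t \mid Z)}
   = e^{\beta^\top Z}\, \frac{1 - S_0(t)}{S_0(t)}. \]
```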
{"title":"On a simple estimation of the proportional odds model under right truncation.","authors":"Peng Liu, Kwun Chuen Gary Chan, Ying Qing Chen","doi":"10.1007/s10985-022-09584-2","DOIUrl":"https://doi.org/10.1007/s10985-022-09584-2","url":null,"abstract":"<p><p>Retrospective sampling can be useful in epidemiological research for its convenience to explore an etiological association. One particular retrospective sampling is that disease outcomes of the time-to-event type are collected subject to right truncation, along with other covariates of interest. For regression analysis of the right-truncated time-to-event data, the so-called proportional reverse-time hazards model has been proposed, but the interpretation of its regression parameters tends to be cumbersome, which has greatly hampered its application in practice. In this paper, we instead consider the proportional odds model, an appealing alternative to the popular proportional hazards model. Under the proportional odds model, there is an embedded relationship between the reverse-time hazard function and the usual hazard function. Building on this relationship, we provide a simple procedure to estimate the regression parameters in the proportional odds model for the right truncated data. Weighted estimations are also studied.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"537-554"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10258175/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9614963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A semi-parametric weighted likelihood approach for regression analysis of bivariate interval-censored outcomes from case-cohort studies.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-023-09593-9
Yichen Lou, Peijie Wang, Jianguo Sun
The case-cohort design was developed to reduce costs when disease incidence is low and covariates are difficult to obtain. Interval-censored failure time data frequently occur in many areas, and a large literature on their analysis has been established. However, most of the existing case-cohort methods are for right-censored data, and only limited research exists on interval-censored data, especially on regression analysis of bivariate interval-censored data. In this paper, we discuss the situation of bivariate interval-censored data arising from case-cohort studies. For the problem, a class of semiparametric transformation frailty models is presented, and for inference, a sieve weighted likelihood approach is developed. The large sample properties, including the consistency of the proposed estimators and the asymptotic normality of the regression parameter estimators, are established. Moreover, a simulation study is conducted to assess the finite-sample performance of the proposed method and suggests that it performs well in practice.
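As a rough sketch of the model class (an assumption about its form; the paper's exact specification may differ), a transformation frailty model for the bivariate pair links the two conditional cumulative hazards through a shared frailty:

```latex
% Given a shared frailty b, the conditional cumulative hazard of T_k, k = 1, 2:
\[ \Lambda_k(t \mid Z, b) = G\!\left[ \Lambda_{0k}(t)\, e^{\beta^\top Z + b} \right], \]
% with G a prespecified transformation, e.g. the logarithmic family
\[ G(x) = \frac{\log(1 + r x)}{r}, \quad r > 0, \qquad G(x) = x \text{ for } r = 0, \]
% which reduces to a (frailty) proportional hazards model at r = 0.
```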
{"title":"A semi-parametric weighted likelihood approach for regression analysis of bivariate interval-censored outcomes from case-cohort studies.","authors":"Yichen Lou, Peijie Wang, Jianguo Sun","doi":"10.1007/s10985-023-09593-9","DOIUrl":"https://doi.org/10.1007/s10985-023-09593-9","url":null,"abstract":"<p><p>The case-cohort design was developed to reduce costs when disease incidence is low and covariates are difficult to obtain. However, most of the existing methods are for right-censored data and there exists only limited research on interval-censored data, especially on regression analysis of bivariate interval-censored data. Interval-censored failure time data frequently occur in many areas and a large literature on their analyses has been established. In this paper, we discuss the situation of bivariate interval-censored data arising from case-cohort studies. For the problem, a class of semiparametric transformation frailty models is presented and for inference, a sieve weighted likelihood approach is developed. The large sample properties, including the consistency of the proposed estimators and the asymptotic normality of the regression parameter estimators, are established. Moreover, a simulation is conducted to assess the finite sample performance of the proposed method and suggests that it performs well in practice.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"628-653"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regression models for censored time-to-event data using infinitesimal jack-knife pseudo-observations, with applications to left-truncation.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-023-09597-5
Erik T Parner, Per K Andersen, Morten Overgaard
Jack-knife pseudo-observations have in recent decades gained popularity in regression analysis for various aspects of time-to-event data. A limitation of jack-knife pseudo-observations is that their computation is time consuming, as the base estimate needs to be recalculated when leaving out each observation. We show that jack-knife pseudo-observations can be closely approximated using the idea of infinitesimal jack-knife residuals. The infinitesimal jack-knife pseudo-observations are much faster to compute than jack-knife pseudo-observations. A key assumption for the unbiasedness of the jack-knife pseudo-observation approach concerns the influence function of the base estimate. We reiterate why this condition on the influence function is needed for unbiased inference and show that it is not satisfied for the Kaplan-Meier base estimate in a left-truncated cohort. We then present a modification of the infinitesimal jack-knife pseudo-observations that provides unbiased estimates in a left-truncated cohort. The computational speed and the medium- and large-sample properties of the jack-knife pseudo-observations and the infinitesimal jack-knife pseudo-observations are compared, and we present an application of the modified infinitesimal jack-knife pseudo-observations in a left-truncated cohort of Danish patients with diabetes.
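For orientation, the two constructions being compared are (in standard notation, assumed here):

```latex
% Jack-knife pseudo-observation of subject i for a base estimate \hat\theta
% computed from n subjects, with \hat\theta^{(-i)} the leave-one-out estimate:
\[ P_i = n\,\hat\theta - (n - 1)\,\hat\theta^{(-i)}, \]
% and its infinitesimal jack-knife approximation via the estimated first-order
% influence function \dot\theta of the base estimate, which avoids
% recomputing the base estimate n times:
\[ P_i \approx \hat\theta + \dot\theta(X_i). \]
```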
{"title":"Regression models for censored time-to-event data using infinitesimal jack-knife pseudo-observations, with applications to left-truncation.","authors":"Erik T Parner, Per K Andersen, Morten Overgaard","doi":"10.1007/s10985-023-09597-5","DOIUrl":"https://doi.org/10.1007/s10985-023-09597-5","url":null,"abstract":"<p><p>Jack-knife pseudo-observations have in recent decades gained popularity in regression analysis for various aspects of time-to-event data. A limitation of the jack-knife pseudo-observations is that their computation is time consuming, as the base estimate needs to be recalculated when leaving out each observation. We show that jack-knife pseudo-observations can be closely approximated using the idea of the infinitesimal jack-knife residuals. The infinitesimal jack-knife pseudo-observations are much faster to compute than jack-knife pseudo-observations. A key assumption of the unbiasedness of the jack-knife pseudo-observation approach is on the influence function of the base estimate. We reiterate why the condition on the influence function is needed for unbiased inference and show that the condition is not satisfied for the Kaplan-Meier base estimate in a left-truncated cohort. We present a modification of the infinitesimal jack-knife pseudo-observations that provide unbiased estimates in a left-truncated cohort. The computational speed and medium and large sample properties of the jack-knife pseudo-observations and infinitesimal jack-knife pseudo-observation are compared and we present an application of the modified infinitesimal jack-knife pseudo-observations in a left-truncated cohort of Danish patients with diabetes.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"654-671"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10258172/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9622679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consistent and robust inference in hazard probability and odds models with discrete-time survival data.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-022-09585-1
Zhiqiang Tan
For discrete-time survival data, conditional likelihood inference in Cox's hazard odds model is theoretically desirable, but exact calculation is numerically intractable with a moderate to large number of tied events. Unconditional maximum likelihood estimation over both regression coefficients and baseline hazard probabilities can be problematic with a large number of time intervals. We develop new methods and theory using numerically simple estimating functions, along with model-based and model-robust variance estimation, in hazard probability and odds models. For the hazard probability model, we derive the Breslow-Peto estimator as a consistent estimator; it was previously known as an approximation to the conditional likelihood estimator in the hazard odds model. For the hazard odds model, we propose a weighted Mantel-Haenszel estimator, which satisfies conditional unbiasedness given the numbers of events in addition to the risk sets and covariates, similarly to the conditional likelihood estimator. Our methods are expected to perform satisfactorily in a broad range of settings, with small or large numbers of tied events corresponding to a large or small number of time intervals. The methods are implemented in the R package dSurvival.
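In standard discrete-time notation (an assumption, since the abstract does not display the models), with hazard probability h_j(x) = P(T = t_j | T >= t_j, x) at time point t_j, the two models read:

```latex
% Hazard probability model (multiplicative on the hazard probability):
\[ h_j(x) = h_{0j}\, e^{\beta^\top x}, \]
% Hazard odds model (logistic in form, as in Cox's discrete-time model):
\[ \frac{h_j(x)}{1 - h_j(x)} = \frac{h_{0j}}{1 - h_{0j}}\, e^{\beta^\top x}. \]
```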
{"title":"Consistent and robust inference in hazard probability and odds models with discrete-time survival data.","authors":"Zhiqiang Tan","doi":"10.1007/s10985-022-09585-1","DOIUrl":"https://doi.org/10.1007/s10985-022-09585-1","url":null,"abstract":"<p><p>For discrete-time survival data, conditional likelihood inference in Cox's hazard odds model is theoretically desirable but exact calculation is numerical intractable with a moderate to large number of tied events. Unconditional maximum likelihood estimation over both regression coefficients and baseline hazard probabilities can be problematic with a large number of time intervals. We develop new methods and theory using numerically simple estimating functions, along with model-based and model-robust variance estimation, in hazard probability and odds models. For the probability hazard model, we derive as a consistent estimator the Breslow-Peto estimator, previously known as an approximation to the conditional likelihood estimator in the hazard odds model. For the hazard odds model, we propose a weighted Mantel-Haenszel estimator, which satisfies conditional unbiasedness given the numbers of events in addition to the risk sets and covariates, similarly to the conditional likelihood estimator. Our methods are expected to perform satisfactorily in a broad range of settings, with small or large numbers of tied events corresponding to a large or small number of time intervals. The methods are implemented in the R package dSurvival.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"555-584"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9613466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Latency function estimation under the mixture cure model when the cure status is available.
Pub Date: 2023-07-01 | Epub Date: 2023-03-08 | DOI: 10.1007/s10985-023-09591-x
Wende Clarence Safari, Ignacio López-de-Ullibarri, María Amalia Jácome
This paper addresses the problem of estimating the conditional survival function of the lifetime of subjects experiencing the event (the latency) in the mixture cure model when the cure status information is partially available. Past work relies on the assumption that long-term survivors are unidentifiable because of right censoring. However, in some cases this assumption is invalid, since some subjects are known to be cured, e.g., when a medical test ascertains that a disease has entirely disappeared after treatment. We propose a latency estimator that extends the nonparametric estimator studied in López-Cheda et al. (TEST 26(2):353-376, 2017b) to the case when the cure status is partially available. We establish the asymptotic normality of the estimator and illustrate its performance in a simulation study. Finally, the estimator is applied to a medical dataset to study the length of hospital stay of COVID-19 patients requiring intensive care.
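The mixture cure decomposition that the latency estimator targets can be written as (standard notation, assumed here):

```latex
% Mixture cure model: population survival splits into cured and uncured parts,
\[ S(t \mid x) = 1 - p(x) + p(x)\, S_u(t \mid x), \]
% where p(x) = P(uncured | x) is the incidence and S_u(t | x) is the latency,
% i.e. the conditional survival function of the uncured, the target here.
```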
{"title":"Latency function estimation under the mixture cure model when the cure status is available.","authors":"Wende Clarence Safari, Ignacio López-de-Ullibarri, María Amalia Jácome","doi":"10.1007/s10985-023-09591-x","DOIUrl":"10.1007/s10985-023-09591-x","url":null,"abstract":"<p><p>This paper addresses the problem of estimating the conditional survival function of the lifetime of the subjects experiencing the event (latency) in the mixture cure model when the cure status information is partially available. The approach of past work relies on the assumption that long-term survivors are unidentifiable because of right censoring. However, in some cases this assumption is invalid since some subjects are known to be cured, e.g., when a medical test ascertains that a disease has entirely disappeared after treatment. We propose a latency estimator that extends the nonparametric estimator studied in López-Cheda et al. (TEST 26(2):353-376, 2017b) to the case when the cure status is partially available. We establish the asymptotic normality distribution of the estimator, and illustrate its performance in a simulation study. Finally, the estimator is applied to a medical dataset to study the length of hospital stay of COVID-19 patients requiring intensive care.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"608-627"},"PeriodicalIF":1.2,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9994787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9619729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combined estimating equation approaches for the additive hazards model with left-truncated and interval-censored data.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-023-09596-6
Tianyi Lu, Shuwei Li, Liuquan Sun
Interval-censored failure time data arise commonly in various scientific studies where the failure time of interest is only known to lie in a certain time interval rather than being observed exactly. In addition, left truncation of the failure event may occur and can greatly complicate the statistical analysis. In this paper, we investigate regression analysis of left-truncated and interval-censored data with the commonly used additive hazards model. Specifically, we propose a conditional estimating equation approach for the estimation and further improve its efficiency by combining the conditional estimating equation with a pairwise pseudo-score-based estimating equation that eliminates the nuisance functions from the marginal likelihood of the truncation times. Asymptotic properties of the proposed estimators are established, including consistency and asymptotic normality. Extensive simulation studies are conducted to evaluate the empirical performance of the proposed methods; they suggest that the combined estimating equation approach is clearly more efficient than the conditional estimating equation approach. We then apply the proposed methods to a set of real data for illustration.
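For reference, the additive hazards model referred to is presumably of the familiar form (an assumption; notation not taken from the paper):

```latex
% Additive hazards model: covariates shift the hazard additively,
\[ \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z(t), \]
% with \lambda_0 an unspecified baseline hazard function.
```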
{"title":"Combined estimating equation approaches for the additive hazards model with left-truncated and interval-censored data.","authors":"Tianyi Lu, Shuwei Li, Liuquan Sun","doi":"10.1007/s10985-023-09596-6","DOIUrl":"https://doi.org/10.1007/s10985-023-09596-6","url":null,"abstract":"<p><p>Interval-censored failure time data arise commonly in various scientific studies where the failure time of interest is only known to lie in a certain time interval rather than observed exactly. In addition, left truncation on the failure event may occur and can greatly complicate the statistical analysis. In this paper, we investigate regression analysis of left-truncated and interval-censored data with the commonly used additive hazards model. Specifically, we propose a conditional estimating equation approach for the estimation, and further improve its estimation efficiency by combining the conditional estimating equation and the pairwise pseudo-score-based estimating equation that can eliminate the nuisance functions from the marginal likelihood of the truncation times. Asymptotic properties of the proposed estimators are discussed including the consistency and asymptotic normality. Extensive simulation studies are conducted to evaluate the empirical performance of the proposed methods, and suggest that the combined estimating equation approach is obviously more efficient than the conditional estimating equation approach. We then apply the proposed methods to a set of real data for illustration.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"672-697"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9670548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semiparametric predictive inference for failure data using first-hitting-time threshold regression.
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-022-09583-3
Mei-Ling Ting Lee, G A Whitmore
The progression of disease for an individual can be described mathematically as a stochastic process. The individual experiences a failure event when the disease path first reaches or crosses a critical disease level. This first passage defines a failure event and a first hitting time, or time-to-event, both of which are important in medical contexts. When the context involves explanatory variables, there is usually an interest in incorporating regression structures into the analysis, and the methodology known as threshold regression comes into play. To date, most applications of threshold regression have been based on parametric families of stochastic processes. This paper presents a semiparametric form of threshold regression that requires the stochastic process to have only one key property, namely, stationary independent increments. As this property is frequently encountered in real applications, the model has potential for use in many fields. The mathematical underpinnings of this semiparametric approach to estimation and prediction are described. The basic data element required by the model is a pair of readings representing the observed change in time and the observed change in disease level, arising from either a failure event or survival of the individual to the end of the data record. An extension is presented for applications where the underlying disease process is unobservable but component covariate processes are available to construct a surrogate disease process. Threshold regression, used in combination with a data technique called Markov decomposition, allows the methods to handle longitudinal time-to-event data by uncoupling a longitudinal record into a sequence of single records. Computational aspects of the methods are straightforward. An array of simulation experiments that verify computational feasibility and statistical inference is reported in an online supplement. Case applications based on longitudinal observational data from The Osteoarthritis Initiative (OAI) study are presented to demonstrate the methodology and its practical use.
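A minimal simulation sketch of the data structure the model consumes, assuming a Wiener disease process (one member of the stationary-independent-increments family) with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate_record(y0, mu, sigma, dt=0.01, t_max=20.0):
    """One subject's record under a Wiener disease process with drift mu and
    volatility sigma, chosen here for concreteness; failure occurs when the
    path, started at y0 > 0, first reaches the critical level 0."""
    y, t = y0, 0.0
    while t < t_max and y > 0.0:
        y += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    failed = y <= 0.0
    # The model's basic data element: a pair (observed change in time,
    # observed change in disease level), arising from either a failure
    # event or survival to the end of the data record.
    return round(t, 2), round(y0 - max(y, 0.0), 2), failed

print([simulate_record(y0=5.0, mu=-0.3, sigma=1.0) for _ in range(3)])
```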
{"title":"Semiparametric predictive inference for failure data using first-hitting-time threshold regression.","authors":"Mei-Ling Ting Lee, G A Whitmore","doi":"10.1007/s10985-022-09583-3","DOIUrl":"https://doi.org/10.1007/s10985-022-09583-3","url":null,"abstract":"<p><p>The progression of disease for an individual can be described mathematically as a stochastic process. The individual experiences a failure event when the disease path first reaches or crosses a critical disease level. This happening defines a failure event and a first hitting time or time-to-event, both of which are important in medical contexts. When the context involves explanatory variables then there is usually an interest in incorporating regression structures into the analysis and the methodology known as threshold regression comes into play. To date, most applications of threshold regression have been based on parametric families of stochastic processes. This paper presents a semiparametric form of threshold regression that requires the stochastic process to have only one key property, namely, stationary independent increments. As this property is frequently encountered in real applications, this model has potential for use in many fields. The mathematical underpinnings of this semiparametric approach for estimation and prediction are described. The basic data element required by the model is a pair of readings representing the observed change in time and the observed change in disease level, arising from either a failure event or survival of the individual to the end of the data record. An extension is presented for applications where the underlying disease process is unobservable but component covariate processes are available to construct a surrogate disease process. Threshold regression, used in combination with a data technique called Markov decomposition, allows the methods to handle longitudinal time-to-event data by uncoupling a longitudinal record into a sequence of single records. Computational aspects of the methods are straightforward. An array of simulation experiments that verify computational feasibility and statistical inference are reported in an online supplement. Case applications based on longitudinal observational data from The Osteoarthritis Initiative (OAI) study are presented to demonstrate the methodology and its practical use.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"508-536"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special issue dedicated to Ørnulf Borgan.
Pub Date: 2023-04-01 | Epub Date: 2023-02-18 | DOI: 10.1007/s10985-023-09592-w
S O Samuelsen, O O Aalen
{"title":"Special issue dedicated to Ørnulf Borgan.","authors":"S O Samuelsen, O O Aalen","doi":"10.1007/s10985-023-09592-w","DOIUrl":"10.1007/s10985-023-09592-w","url":null,"abstract":"","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"253-255"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9937859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9095974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A boosting first-hitting-time model for survival analysis in high-dimensional settings.
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-022-09553-9
Riccardo De Bin, Vegard Grødem Stikbakke
In this paper we propose a boosting algorithm to extend the applicability of a first hitting time model to high-dimensional frameworks. Based on an underlying stochastic process, first hitting time models do not require the proportional hazards assumption, which is hard to verify in the high-dimensional context, and represent a valid parametric alternative to the Cox model for modelling time-to-event responses. First hitting time models also offer a natural way to integrate low-dimensional clinical and high-dimensional molecular information in a prediction model, avoiding the complicated weighting schemes typical of current methods. The performance of our novel boosting algorithm is illustrated in three real data examples.
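A generic componentwise gradient-boosting skeleton of the kind such algorithms build on (not the authors' implementation); `grad_fn` is a hypothetical callable returning the negative gradient of the chosen loss, e.g. an FHT log-likelihood:

```python
import numpy as np

def componentwise_boost(X, grad_fn, n_steps=100, nu=0.1):
    """Componentwise gradient boosting: at each step, regress the negative
    gradient of the loss on every single (standardized) covariate by least
    squares and update only the best-fitting coefficient by a small step nu."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_steps):
        u = grad_fn(X @ beta)                              # pseudo-residuals
        fits = (X * u[:, None]).sum(0) / (X ** 2).sum(0)   # per-covariate LS slopes
        rss = ((u[:, None] - X * fits) ** 2).sum(0)        # fit of each covariate
        j = int(np.argmin(rss))                            # best single covariate
        beta[j] += nu * fits[j]
    return beta

# Toy call with synthetic data and a squared-error gradient (illustrative only):
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(50)
print(componentwise_boost(X, grad_fn=lambda eta: y - eta))
```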
{"title":"A boosting first-hitting-time model for survival analysis in high-dimensional settings.","authors":"Riccardo De Bin, Vegard Grødem Stikbakke","doi":"10.1007/s10985-022-09553-9","DOIUrl":"https://doi.org/10.1007/s10985-022-09553-9","url":null,"abstract":"<p><p>In this paper we propose a boosting algorithm to extend the applicability of a first hitting time model to high-dimensional frameworks. Based on an underlying stochastic process, first hitting time models do not require the proportional hazards assumption, hardly verifiable in the high-dimensional context, and represent a valid parametric alternative to the Cox model for modelling time-to-event responses. First hitting time models also offer a natural way to integrate low-dimensional clinical and high-dimensional molecular information in a prediction model, that avoids complicated weighting schemes typical of current methods. The performance of our novel boosting algorithm is illustrated in three real data examples.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"420-440"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006065/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9147398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}