Combined estimating equation approaches for the additive hazards model with left-truncated and interval-censored data
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-023-09596-6
Tianyi Lu, Shuwei Li, Liuquan Sun
Interval-censored failure time data arise commonly in scientific studies where the failure time of interest is known only to lie within a certain time interval rather than being observed exactly. In addition, the failure event may be subject to left truncation, which greatly complicates the statistical analysis. In this paper, we investigate regression analysis of left-truncated and interval-censored data under the commonly used additive hazards model. Specifically, we propose a conditional estimating equation approach for estimation, and further improve its efficiency by combining the conditional estimating equation with a pairwise pseudo-score-based estimating equation that eliminates the nuisance functions from the marginal likelihood of the truncation times. Asymptotic properties of the proposed estimators, including consistency and asymptotic normality, are established. Extensive simulation studies evaluate the empirical performance of the proposed methods and suggest that the combined estimating equation approach is clearly more efficient than the conditional estimating equation approach alone. We then apply the proposed methods to a set of real data for illustration.
{"title":"Combined estimating equation approaches for the additive hazards model with left-truncated and interval-censored data.","authors":"Tianyi Lu, Shuwei Li, Liuquan Sun","doi":"10.1007/s10985-023-09596-6","DOIUrl":"https://doi.org/10.1007/s10985-023-09596-6","url":null,"abstract":"<p><p>Interval-censored failure time data arise commonly in various scientific studies where the failure time of interest is only known to lie in a certain time interval rather than observed exactly. In addition, left truncation on the failure event may occur and can greatly complicate the statistical analysis. In this paper, we investigate regression analysis of left-truncated and interval-censored data with the commonly used additive hazards model. Specifically, we propose a conditional estimating equation approach for the estimation, and further improve its estimation efficiency by combining the conditional estimating equation and the pairwise pseudo-score-based estimating equation that can eliminate the nuisance functions from the marginal likelihood of the truncation times. Asymptotic properties of the proposed estimators are discussed including the consistency and asymptotic normality. Extensive simulation studies are conducted to evaluate the empirical performance of the proposed methods, and suggest that the combined estimating equation approach is obviously more efficient than the conditional estimating equation approach. We then apply the proposed methods to a set of real data for illustration.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"672-697"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9670548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semiparametric predictive inference for failure data using first-hitting-time threshold regression
Pub Date: 2023-07-01 | DOI: 10.1007/s10985-022-09583-3
Mei-Ling Ting Lee, G A Whitmore
The progression of disease for an individual can be described mathematically as a stochastic process. The individual experiences a failure event when the disease path first reaches or crosses a critical disease level. This crossing defines a failure event and a first hitting time, or time-to-event, both of which are important in medical contexts. When the context involves explanatory variables, there is usually an interest in incorporating regression structures into the analysis, and the methodology known as threshold regression comes into play. To date, most applications of threshold regression have been based on parametric families of stochastic processes. This paper presents a semiparametric form of threshold regression that requires the stochastic process to have only one key property, namely, stationary independent increments. As this property is frequently encountered in real applications, the model has potential for use in many fields. The mathematical underpinnings of this semiparametric approach to estimation and prediction are described. The basic data element required by the model is a pair of readings representing the observed change in time and the observed change in disease level, arising from either a failure event or survival of the individual to the end of the data record. An extension is presented for applications where the underlying disease process is unobservable but component covariate processes are available to construct a surrogate disease process. Threshold regression, used in combination with a data technique called Markov decomposition, allows the methods to handle longitudinal time-to-event data by uncoupling a longitudinal record into a sequence of single records. Computational aspects of the methods are straightforward. An array of simulation experiments verifying computational feasibility and statistical inference is reported in an online supplement. Case applications based on longitudinal observational data from The Osteoarthritis Initiative (OAI) study demonstrate the methodology and its practical use.
{"title":"Semiparametric predictive inference for failure data using first-hitting-time threshold regression.","authors":"Mei-Ling Ting Lee, G A Whitmore","doi":"10.1007/s10985-022-09583-3","DOIUrl":"https://doi.org/10.1007/s10985-022-09583-3","url":null,"abstract":"<p><p>The progression of disease for an individual can be described mathematically as a stochastic process. The individual experiences a failure event when the disease path first reaches or crosses a critical disease level. This happening defines a failure event and a first hitting time or time-to-event, both of which are important in medical contexts. When the context involves explanatory variables then there is usually an interest in incorporating regression structures into the analysis and the methodology known as threshold regression comes into play. To date, most applications of threshold regression have been based on parametric families of stochastic processes. This paper presents a semiparametric form of threshold regression that requires the stochastic process to have only one key property, namely, stationary independent increments. As this property is frequently encountered in real applications, this model has potential for use in many fields. The mathematical underpinnings of this semiparametric approach for estimation and prediction are described. The basic data element required by the model is a pair of readings representing the observed change in time and the observed change in disease level, arising from either a failure event or survival of the individual to the end of the data record. An extension is presented for applications where the underlying disease process is unobservable but component covariate processes are available to construct a surrogate disease process. Threshold regression, used in combination with a data technique called Markov decomposition, allows the methods to handle longitudinal time-to-event data by uncoupling a longitudinal record into a sequence of single records. Computational aspects of the methods are straightforward. An array of simulation experiments that verify computational feasibility and statistical inference are reported in an online supplement. Case applications based on longitudinal observational data from The Osteoarthritis Initiative (OAI) study are presented to demonstrate the methodology and its practical use.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 3","pages":"508-536"},"PeriodicalIF":1.3,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special issue dedicated to Ørnulf Borgan
Pub Date: 2023-04-01 (Epub 2023-02-18) | DOI: 10.1007/s10985-023-09592-w
S O Samuelsen, O O Aalen
{"title":"Special issue dedicated to Ørnulf Borgan.","authors":"S O Samuelsen, O O Aalen","doi":"10.1007/s10985-023-09592-w","DOIUrl":"10.1007/s10985-023-09592-w","url":null,"abstract":"","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"253-255"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9937859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9095974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A boosting first-hitting-time model for survival analysis in high-dimensional settings
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-022-09553-9
Riccardo De Bin, Vegard Grødem Stikbakke
In this paper we propose a boosting algorithm that extends the applicability of first hitting time models to high-dimensional frameworks. Because they are based on an underlying stochastic process, first hitting time models do not require the proportional hazards assumption, which is hard to verify in the high-dimensional context, and they represent a valid parametric alternative to the Cox model for modelling time-to-event responses. First hitting time models also offer a natural way to integrate low-dimensional clinical and high-dimensional molecular information in a prediction model, one that avoids the complicated weighting schemes typical of current methods. The performance of our novel boosting algorithm is illustrated in three real data examples.
{"title":"A boosting first-hitting-time model for survival analysis in high-dimensional settings.","authors":"Riccardo De Bin, Vegard Grødem Stikbakke","doi":"10.1007/s10985-022-09553-9","DOIUrl":"https://doi.org/10.1007/s10985-022-09553-9","url":null,"abstract":"<p><p>In this paper we propose a boosting algorithm to extend the applicability of a first hitting time model to high-dimensional frameworks. Based on an underlying stochastic process, first hitting time models do not require the proportional hazards assumption, hardly verifiable in the high-dimensional context, and represent a valid parametric alternative to the Cox model for modelling time-to-event responses. First hitting time models also offer a natural way to integrate low-dimensional clinical and high-dimensional molecular information in a prediction model, that avoids complicated weighting schemes typical of current methods. The performance of our novel boosting algorithm is illustrated in three real data examples.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"420-440"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006065/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9147398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The partly parametric and partly nonparametric additive risk model
Pub Date: 2023-04-01 (Epub 2021-10-26) | DOI: 10.1007/s10985-021-09535-3
Nils Lid Hjort, Emil Aas Stoltenberg
Aalen's linear hazard rate regression model is a useful and increasingly popular alternative to Cox's multiplicative hazard rate model. It postulates that an individual has hazard rate function alpha(s | x) = beta_1(s) x_1 + ... + beta_r(s) x_r in terms of his covariate values x = (x_1, ..., x_r). These are typically levels of various hazard factors and may also be time-dependent. The hazard factor functions beta_1(s), ..., beta_r(s) are the parameters of the model and are estimated from data, traditionally in a fully nonparametric way. This paper develops methodology for estimating the hazard factor functions when some of them are modelled parametrically while the others are left unspecified. Large-sample results are obtained within this partly parametric, partly nonparametric framework, which also enables us to assess the goodness of fit of the model's parametric components. In addition, these results are used to pinpoint how much precision is gained by using the parametric-nonparametric model over the standard nonparametric method. A real-data application is included, along with a brief simulation study.
{"title":"The partly parametric and partly nonparametric additive risk model.","authors":"Nils Lid Hjort, Emil Aas Stoltenberg","doi":"10.1007/s10985-021-09535-3","DOIUrl":"10.1007/s10985-021-09535-3","url":null,"abstract":"<p><p>Aalen's linear hazard rate regression model is a useful and increasingly popular alternative to Cox' multiplicative hazard rate model. It postulates that an individual has hazard rate function [Formula: see text] in terms of his covariate values [Formula: see text]. These are typically levels of various hazard factors, and may also be time-dependent. The hazard factor functions [Formula: see text] are the parameters of the model and are estimated from data. This is traditionally accomplished in a fully nonparametric way. This paper develops methodology for estimating the hazard factor functions when some of them are modelled parametrically while the others are left unspecified. Large-sample results are reached inside this partly parametric, partly nonparametric framework, which also enables us to assess the goodness of fit of the model's parametric components. In addition, these results are used to pinpoint how much precision is gained, using the parametric-nonparametric model, over the standard nonparametric method. A real-data application is included, along with a brief simulation study.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"372-402"},"PeriodicalIF":1.2,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006282/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9493084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cox regression can be collapsible and Aalen regression can be non-collapsible
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-022-09578-0
Sven Ove Samuelsen
It is well known that the additive hazards model is collapsible, in the sense that when one covariate is omitted from a model with two independent covariates, the marginal model is still an additive hazards model with the same regression coefficient or function for the remaining covariate. In contrast, for the proportional hazards model under the same covariate assumption, the marginal model is no longer a proportional hazards model and is not collapsible. These results, however, relate to the model specification and not to the regression parameter estimators. We point out that if covariates in the risk sets at all event times are independent, then both the Cox and Aalen regression estimators are collapsible, in the sense that the parameter estimators in the full and marginal models are consistent for the same value. Vice versa, if this assumption fails, then the estimates will change systematically for both Cox and Aalen regression. In particular, if the data are generated by an Aalen model with censoring independent of covariates, both Cox and Aalen regression are collapsible, but if the data are generated by a proportional hazards model, neither estimator is. We also discuss settings where survival times are generated by proportional hazards models with censoring patterns that provide uncorrelated covariates and hence collapsible Cox and Aalen regression estimates. Furthermore, possible consequences for instrumental variable analyses are discussed.
{"title":"Cox regression can be collapsible and Aalen regression can be non-collapsible.","authors":"Sven Ove Samuelsen","doi":"10.1007/s10985-022-09578-0","DOIUrl":"https://doi.org/10.1007/s10985-022-09578-0","url":null,"abstract":"<p><p>It is well-known that the additive hazards model is collapsible, in the sense that when omitting one covariate from a model with two independent covariates, the marginal model is still an additive hazards model with the same regression coefficient or function for the remaining covariate. In contrast, for the proportional hazards model under the same covariate assumption, the marginal model is no longer a proportional hazards model and is not collapsible. These results, however, relate to the model specification and not to the regression parameter estimators. We point out that if covariates in risk sets at all event times are independent then both Cox and Aalen regression estimators are collapsible, in the sense that the parameter estimators in the full and marginal models are consistent for the same value. Vice-versa, if this assumption fails, then the estimates will change systematically both for Cox and Aalen regression. In particular, if the data are generated by an Aalen model with censoring independent of covariates both Cox and Aalen regression is collapsible, but if generated by a proportional hazards model neither estimators are. We will also discuss settings where survival times are generated by proportional hazards models with censoring patterns providing uncorrelated covariates and hence collapsible Cox and Aalen regression estimates. Furthermore, possible consequences for instrumental variable analyses are discussed.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"403-419"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9494612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On logistic regression with right censored data, with or without competing risks, and its use for estimating treatment effects
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-022-09564-6
Paul Frédéric Blanche, Anders Holt, Thomas Scheike
Simple logistic regression can be adapted to deal with right censoring by inverse probability of censoring weighting (IPCW). Here we compare two such IPCW approaches, one based on weighting the outcome and the other based on weighting the estimating equations. We study the large-sample properties of the two approaches and show that which of the two weighting methods is more efficient depends on the censoring distribution. We show by theoretical computations that the methods can be surprisingly different in realistic settings. We further show how to use the two weighting approaches for logistic regression to estimate causal treatment effects, for both observational studies and randomized clinical trials (RCTs). Several estimators for observational studies are compared, and we present an application to registry data. We also revisit interesting robustness properties of logistic regression in the context of RCTs, with a particular focus on the IPCW weighting. We find that these robustness properties still hold when the censoring weights are correctly specified, but not necessarily otherwise.
{"title":"On logistic regression with right censored data, with or without competing risks, and its use for estimating treatment effects.","authors":"Paul Frédéric Blanche, Anders Holt, Thomas Scheike","doi":"10.1007/s10985-022-09564-6","DOIUrl":"https://doi.org/10.1007/s10985-022-09564-6","url":null,"abstract":"<p><p>Simple logistic regression can be adapted to deal with right-censoring by inverse probability of censoring weighting (IPCW). We here compare two such IPCW approaches, one based on weighting the outcome, the other based on weighting the estimating equations. We study the large sample properties of the two approaches and show that which of the two weighting methods is the most efficient depends on the censoring distribution. We show by theoretical computations that the methods can be surprisingly different in realistic settings. We further show how to use the two weighting approaches for logistic regression to estimate causal treatment effects, for both observational studies and randomized clinical trials (RCT). Several estimators for observational studies are compared and we present an application to registry data. We also revisit interesting robustness properties of logistic regression in the context of RCTs, with a particular focus on the IPCW weighting. We find that these robustness properties still hold when the censoring weights are correctly specified, but not necessarily otherwise.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"441-482"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9134954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis and asymptotic theory for nested case-control designs under highly stratified proportional hazards models
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-022-09582-4
Larry Goldstein, Bryan Langholz
Nested case-control sampled event time data under a highly stratified proportional hazards model, in which the number of strata increases in proportion to the sample size, are described and analyzed. The data can be characterized as stratified sampling from the event time risk sets, and the analysis approach of Borgan et al. (Ann Stat 23:1749-1778, 1995) is adapted to accommodate both the stratification and case-control sampling from the stratified risk sets. Conditions for the consistency and asymptotic normality of the maximum partial likelihood estimator are provided, and the results are used to compare the efficiency of the stratified analysis to an unstratified analysis when the baseline hazards can be semiparametrically modeled in two special cases. Using the stratified sampling representation of the stratified analysis, the absolute risk estimation methods described by Borgan et al. (1995) for nested case-control data are extended to the stratified model. The methods are illustrated by a year-of-birth-stratified analysis of radon exposure and lung cancer mortality in a cohort of uranium miners from the Colorado Plateau.
{"title":"Analysis and asymptotic theory for nested case-control designs under highly stratified proportional hazards models.","authors":"Larry Goldstein, Bryan Langholz","doi":"10.1007/s10985-022-09582-4","DOIUrl":"https://doi.org/10.1007/s10985-022-09582-4","url":null,"abstract":"<p><p>Nested case-control sampled event time data under a highly stratified proportional hazards model, in which the number of strata increases proportional to sample size, is described and analyzed. The data can be characterized as stratified sampling from the event time risk sets and the analysis approach of Borgan et al. (Ann Stat 23:1749-1778, 1995) is adapted to accommodate both the stratification and case-control sampling from the stratified risk sets. Conditions for the consistency and asymptotic normality of the maximum partial likelihood estimator are provided and the results are used to compare the efficiency of the stratified analysis to an unstratified analysis when the baseline hazards can be semi-parametrically modeled in two special cases. Using the stratified sampling representation of the stratified analysis, methods for absolute risk estimation described by Borgan et al. (1995) for nested case-control data are used to develop methods for absolute risk estimation under the stratified model. The methods are illustrated by a year of birth stratified analysis of radon exposure and lung cancer mortality in a cohort of uranium miners from the Colorado Plateau.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"342-371"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9139939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phase-type models for competing risks, with emphasis on identifiability issues
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-022-09547-7
Bo Henry Lindqvist
We first review some main results for phase-type distributions, including a discussion of Coxian distributions and their canonical representations. We then consider the extension of phase-type modeling to cover competing risks. This extension involves finite-state Markov chains with more than one absorbing state, letting each absorbing state correspond to a particular risk. The non-uniqueness of Markov chain representations of phase-type distributions is well known. In this paper we study the corresponding issues for the competing risks case, with the aim of obtaining identifiable parameterizations. Statistical inference for the Coxian competing risks model is briefly discussed, and some real data are analyzed for illustration.
{"title":"Phase-type models for competing risks, with emphasis on identifiability issues.","authors":"Bo Henry Lindqvist","doi":"10.1007/s10985-022-09547-7","DOIUrl":"https://doi.org/10.1007/s10985-022-09547-7","url":null,"abstract":"<p><p>We first review some main results for phase-type distributions, including a discussion of Coxian distributions and their canonical representations. We then consider the extension of phase-type modeling to cover competing risks. This extension involves the consideration of finite state Markov chains with more than one absorbing state, letting each absorbing state correspond to a particular risk. The non-uniqueness of Markov chain representations of phase-type distributions is well known. In the paper we study corresponding issues for the competing risks case with the aim of obtaining identifiable parameterizations. Statistical inference for the Coxian competing risks model is briefly discussed and some real data are analyzed for illustration.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"318-341"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006281/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9194640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bivariate pseudo-observations for recurrent event analysis with terminal events
Pub Date: 2023-04-01 | DOI: 10.1007/s10985-021-09533-5
Julie K Furberg, Per K Andersen, Sofie Korn, Morten Overgaard, Henrik Ravn
The analysis of recurrent events in the presence of terminal events requires special attention. Several approaches have been suggested for such analyses, using either intensity models or marginal models. When analysing treatment effects on recurrent events in controlled trials, special attention should be paid to competing deaths and their impact on interpretation. This paper proposes a method that formulates a marginal model for recurrent events and terminal events simultaneously. Estimation is based on pseudo-observations for both the expected number of events and the survival probabilities. Various relevant hypothesis tests in this framework are explored. Theoretical derivations and simulation studies are conducted to investigate the behaviour of the method, and the method is applied to two real data examples. The bivariate marginal pseudo-observation model carries the strength of a two-dimensional modelling procedure and performs well in comparison with available models. Finally, an extension to a three-dimensional model, which decomposes the terminal event by cause of death, is proposed and exemplified.
{"title":"Bivariate pseudo-observations for recurrent event analysis with terminal events.","authors":"Julie K Furberg, Per K Andersen, Sofie Korn, Morten Overgaard, Henrik Ravn","doi":"10.1007/s10985-021-09533-5","DOIUrl":"https://doi.org/10.1007/s10985-021-09533-5","url":null,"abstract":"<p><p>The analysis of recurrent events in the presence of terminal events requires special attention. Several approaches have been suggested for such analyses either using intensity models or marginal models. When analysing treatment effects on recurrent events in controlled trials, special attention should be paid to competing deaths and their impact on interpretation. This paper proposes a method that formulates a marginal model for recurrent events and terminal events simultaneously. Estimation is based on pseudo-observations for both the expected number of events and survival probabilities. Various relevant hypothesis tests in the framework are explored. Theoretical derivations and simulation studies are conducted to investigate the behaviour of the method. The method is applied to two real data examples. The bivariate marginal pseudo-observation model carries the strength of a two-dimensional modelling procedure and performs well in comparison with available models. Finally, an extension to a three-dimensional model, which decomposes the terminal event per death cause, is proposed and exemplified.</p>","PeriodicalId":49908,"journal":{"name":"Lifetime Data Analysis","volume":"29 2","pages":"256-287"},"PeriodicalIF":1.3,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9140831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}