Title: Grouped multi-trajectory modeling using finite mixtures of multivariate contaminated normal linear mixed model
Authors: Tsung-I Lin, Wan-Lun Wang
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-12 | DOI: 10.1177/09622802251404054

There has been growing interest across various research domains in the modeling and clustering of multivariate longitudinal trajectories obtained from internally near-homogeneous subgroups. One prominent motivation for such work arises from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort study, which involves multiple clinical measurements exhibiting complex features such as diverse progression patterns, multimodality, and the presence of atypical observations. To tackle the challenges associated with modeling and clustering such grouped longitudinal data, we propose a finite mixture of multivariate contaminated normal linear mixed models (FM-MCNLMM) and its extended version, referred to as the EFM-MCNLMM, which allows the mixing weights to depend on concomitant covariates. We develop alternating expectation conditional maximization algorithms to carry out maximum likelihood estimation for the two models. The utility and effectiveness of the proposed methodology are demonstrated through simulations and an analysis of the ADNI data.
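The contaminated normal distribution underlying the FM-MCNLMM replaces a single Gaussian with a two-component mixture sharing the same mean: a "good" component and an inflated-variance "bad" component that absorbs atypical observations. A minimal univariate sketch (the paper's model is multivariate; here `alpha` and `eta` denote the contamination proportion and variance-inflation factor):

```python
import numpy as np

def contaminated_normal_pdf(y, mu, sigma2, alpha, eta):
    """Univariate contaminated normal density:
    (1 - alpha) * N(mu, sigma2) + alpha * N(mu, eta * sigma2),
    with contamination proportion alpha and variance inflation eta > 1."""
    def norm_pdf(z, m, v):
        return np.exp(-0.5 * (z - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)
    return (1 - alpha) * norm_pdf(y, mu, sigma2) + alpha * norm_pdf(y, mu, eta * sigma2)
```

With `alpha = 0` this reduces to the ordinary normal density; with `alpha > 0` the tails are heavier, which is what downweights outlying trajectories in the mixture fit.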
Title: Restricted mean survival time in cluster randomized trials with a small number of clusters: Improving variance estimation of the intervention effect from the pseudo-values regression
Authors: Floriane Le Vilain-Abraham, Solène Desmée, Jennifer A Thompson, Jean-Claude Lacherade, Elsa Tavernier, Etienne Dantan, Agnès Caille
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-12 | DOI: 10.1177/09622802251406581

In randomized clinical trials with a time-to-event outcome, the intervention effect can be quantified by the difference in restricted mean survival time (ΔRMST) between the intervention and control groups, defined as the expected gain in survival duration due to the intervention over a fixed follow-up period. In cluster randomized trials (CRTs), social units are randomized to the intervention or control group, and the correlation between survival times of individuals within the same cluster must be taken into account in the statistical analysis. In previous work, we proposed pseudo-values regression, based on generalized estimating equations (GEEs), for estimating ΔRMST in CRTs, and showed that this method correctly estimates ΔRMST and controls the type I error rate in CRTs with at least 50 clusters. Here, we propose methods for CRTs with a small number of clusters (<50). We evaluated the performance of four bias-corrections of the GEE sandwich variance estimator of the intervention effect. We also considered the use of a Student t distribution as an alternative to the normal distribution for the GEE Wald test statistic when testing the intervention effect and constructing the confidence interval. In a simulation study assuming proportional or non-proportional hazards, the Student t distribution outperformed the normal distribution in terms of type I error rate, and the Fay and Graubard bias-corrected variance led to an appropriate type I error rate regardless of the number of clusters. We therefore recommend the Fay and Graubard variance estimator combined with a Student t distribution for pseudo-values regression to correctly estimate the variance of the intervention effect. Finally, we provide an illustrative analysis comparing each of the considered methods on the DEMETER trial, which evaluated a specific endotracheal tube for subglottic secretion drainage to prevent ventilator-associated pneumonia.
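Pseudo-values regression starts from jackknife pseudo-observations of the restricted mean survival time: with θ̂ the Kaplan-Meier RMST on the full sample and θ̂(−i) the estimate leaving subject i out, subject i's pseudo-value is n·θ̂ − (n−1)·θ̂(−i). A minimal sketch assuming no ties in event times (a real analysis would then regress these pseudo-values on the intervention arm via a GEE):

```python
import numpy as np

def km_rmst(time, event, tau):
    """Area under the Kaplan-Meier curve up to tau (restricted mean survival time)."""
    order = np.argsort(time)
    t, d = np.asarray(time, float)[order], np.asarray(event, int)[order]
    surv, area, prev_t, at_risk = 1.0, 0.0, 0.0, len(t)
    for ti, di in zip(t, d):
        if ti > tau:
            break
        area += surv * (ti - prev_t)        # rectangle under the current step
        if di:
            surv *= 1.0 - 1.0 / at_risk     # KM drop at an event time
        prev_t = ti
        at_risk -= 1
    if prev_t < tau:
        area += surv * (tau - prev_t)
    return area

def rmst_pseudo_values(time, event, tau):
    """Jackknife pseudo-observations: n * theta_hat - (n - 1) * theta_hat(-i)."""
    n = len(time)
    theta = km_rmst(time, event, tau)
    return np.array([n * theta - (n - 1) *
                     km_rmst(np.delete(time, i), np.delete(event, i), tau)
                     for i in range(n)])
```

A useful sanity check: with complete (uncensored) data, the pseudo-value for subject i reduces to min(T_i, tau).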
Title: Joint modeling of composite quantile regression for multiple ordinal longitudinal data with its applications to a dementia dataset
Authors: Shuqing Liang, Lina Bian, Qi Yang, Yuzhu Tian, Maozai Tian
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-12 | DOI: 10.1177/09622802251412838

In longitudinal data regression modeling, individuals often have two or more response indicators, and these indicators are typically correlated to some extent. Additionally, in clinical medicine, the response indicators of longitudinal data are often ordinal. For the joint modeling of multivariate ordinal longitudinal data, methods based on mean regression (MR) are commonly used to study latent variables. However, for data with non-normal errors, MR methods often perform poorly. As an alternative, composite quantile regression (CQR) can overcome the limitations of MR methods and provide more robust estimates. This article proposes a joint relative CQR method for multivariate ordinal longitudinal data and investigates its application to a longitudinal medical dataset on dementia. Firstly, the joint relative CQR method for multivariate ordinal longitudinal data is constructed based on the pseudo composite asymmetric Laplace distribution (PCALD) and latent variable models. Secondly, the parameter estimation problem of the model is studied using MCMC algorithms. Finally, Monte Carlo simulations and the dementia dataset validate the effectiveness of the proposed model and method.
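Composite quantile regression replaces the squared-error loss of mean regression with a sum of quantile check losses over several levels, sharing one slope across levels while each level keeps its own intercept. A minimal sketch of the objective (the paper works with a Bayesian formulation via the PCALD; the quantile levels and common-slope parameterization below are illustrative):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, float)
    return u * (tau - (u < 0))

def cqr_objective(beta, intercepts, x, y, taus):
    """Composite quantile regression objective: one common slope beta,
    one intercept per quantile level tau_k."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return sum(check_loss(y - b_k - beta * x, tau_k).sum()
               for b_k, tau_k in zip(intercepts, taus))
```

Any generic optimiser (or linear programming, as in classical quantile regression) can minimise this non-smooth objective; the Bayesian route in the paper instead samples from a posterior built on the PCALD.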
Title: A two-stage joint modeling approach for multiple longitudinal markers and time-to-event data
Authors: Taban Baghfalaki, Reza Hashemi, Catherine Helmer, Helene Jacqmin-Gadda
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-12 | DOI: 10.1177/09622802251406588

Joint modeling of multiple longitudinal markers and time-to-event outcomes is common in clinical studies. However, as the number of markers increases, estimation becomes computationally challenging or infeasible due to long runtimes and convergence difficulties. We propose a novel two-stage Bayesian approach for estimating joint models involving multiple longitudinal measurements and time-to-event outcomes. The proposed method is related to the standard two-stage approach, which separately estimates longitudinal submodels and then incorporates their outputs as time-dependent covariates in a survival model. Unlike the standard method, our first stage estimates a separate one-marker joint model for the event and each longitudinal marker, rather than relying on mixed-effects models. From these models, predictions of the expected current values and/or slopes of individual marker trajectories are obtained, thereby avoiding bias due to informative dropout. In the second stage, a proportional hazards model is fitted that includes the predicted current values and/or slopes of all markers as time-dependent covariates. To account for uncertainty in the first-stage predictions, a multiple imputation strategy is employed when estimating the survival model. This approach enables the construction of prediction models based on a large number of longitudinal markers that would otherwise be computationally intractable with conventional multi-marker joint models. The performance of the proposed method is evaluated through simulation studies and an application to the public PBC2 dataset. Additionally, it is applied to predict dementia risk using a real-world dataset with seventeen longitudinal markers. To facilitate practical use, we developed an R package, TSJM, which is freely available on GitHub: https://github.com/tbaghfalaki/TSJM.
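The multiple-imputation step in the second stage pools the survival-model fits obtained under repeated draws of the first-stage predictions. The standard way to combine M such fits is Rubin's rules, sketched below (the paper's exact pooling may differ in detail):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Rubin's rules for combining M imputed-data analyses:
    pooled estimate = mean of the M estimates;
    total variance  = within-imputation + (1 + 1/M) * between-imputation."""
    q = np.asarray(estimates, float)
    u = np.asarray(variances, float)
    M = len(q)
    qbar = q.mean()
    within = u.mean()
    between = q.var(ddof=1)   # sample variance of the M point estimates
    return qbar, within + (1.0 + 1.0 / M) * between
```

The between-imputation term is what propagates first-stage prediction uncertainty into the final standard errors.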
Title: Functional varying-coefficient Cox model and its application
Authors: Fansheng Kong, Maozai Tian, Zhihao Wang, Man-Lai Tang
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-02 | DOI: 10.1177/09622802251406527

As data become increasingly complex, more flexible models are needed for analyzing survival data. Building upon the existing functional Cox model, this article introduces a novel functional varying-coefficient Cox model together with corresponding estimation algorithms. The proposed model can simultaneously handle survival data with varying-coefficient covariates and functional covariates, thereby significantly enhancing the adaptability of survival models. Model performance is evaluated through simulation studies, and a real application using Alzheimer's Disease Neuroimaging Initiative (ADNI) data illustrates the practicality of the proposed model.
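In a functional Cox model, a functional covariate X(s) enters the hazard's linear predictor through an integral ∫ X(s)β(s) ds against an unknown coefficient function β. A minimal numerical sketch of that term via the trapezoidal rule (the grid and curves are illustrative; the paper's estimation of β is far more involved):

```python
import numpy as np

def functional_term(x_curve, beta_curve, grid):
    """Trapezoidal approximation of the integral of X(s) * beta(s) over the grid,
    i.e. the functional covariate's contribution to the Cox linear predictor."""
    vals = np.asarray(x_curve, float) * np.asarray(beta_curve, float)
    dx = np.diff(np.asarray(grid, float))
    return float(np.sum(0.5 * (vals[:-1] + vals[1:]) * dx))
```

In practice β(s) would be expanded in a spline or functional principal component basis and estimated from the partial likelihood; this sketch only shows how the fitted curves combine into a scalar risk term.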
Title: Joint model with latent disease age: Overcoming the need for reference time
Authors: Juliette Ortholand, Nicolas Gensollen, Stanley Durrleman, Sophie Tezenas Du Montcel
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-02 | DOI: 10.1177/09622802251399917

Heterogeneity in the progression of neurodegenerative diseases is one of the main challenges faced in developing therapies. Thanks to the increasing number of clinical databases, progression models have enabled a better understanding of this heterogeneity. Joint models have proven their effectiveness by combining longitudinal and survival data. Nevertheless, they require a reference time, which is ill-defined for neurodegenerative diseases, where the underlying biological processes start before the first symptoms. In this work, we propose a joint non-linear mixed-effects model with a latent disease age, to overcome the need for a precise reference time. We used a longitudinal model with a latent disease age as the longitudinal sub-model, and associated it with a survival sub-model that estimates a Weibull distribution from the latent disease age. We validated our model on simulated data and benchmarked it against a state-of-the-art joint model on data from patients with Amyotrophic Lateral Sclerosis (ALS). Finally, we showed how the model can be used to describe ALS heterogeneity. Our model achieved significantly better results than the state-of-the-art joint model in terms of absolute bias on the ALS functional rating scale revised score (4.21 (SD 4.41) versus 4.24 (SD 4.14), p-value = 1.4 × 10^-17) and mean cumulative AUC for right-censored death events (0.67 (0.07) versus 0.61 (0.09), p-value = 1.7 × 10^-3). To conclude, we propose a new model better suited to contexts with an unreliable reference time.
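The survival sub-model evaluates a Weibull law at the latent disease age rather than at an observed calendar or study time. A minimal sketch of the corresponding survival function (the shape/scale symbols are generic, not the paper's parameterization):

```python
import numpy as np

def weibull_survival(latent_age, shape, scale):
    """Weibull survival function evaluated at the latent disease age a:
    S(a) = exp(-(a / scale) ** shape)."""
    a = np.asarray(latent_age, float)
    return np.exp(-((a / scale) ** shape))
```

Because the latent age realigns patients on a shared disease timescale, the same Weibull parameters can describe event risk for individuals whose observed follow-up windows start at very different disease stages.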
Title: Informative simultaneous confidence intervals for graphical test procedures
Authors: Werner Brannath, Liane Kluge, Martin Scharpenberg
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-01 (Epub 2025-11-14) | DOI: 10.1177/09622802251393666 | pp. 101-117
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12783375/pdf/

Simultaneous confidence intervals that are compatible with a given closed test procedure are often non-informative. More precisely, for a one-sided null hypothesis, the bound of the simultaneous confidence interval can stick to the border of the null hypothesis, irrespective of how far the point estimate deviates from it. This has been illustrated for the Bonferroni-Holm and fall-back procedures, for which alternative simultaneous confidence intervals free of this deficiency have been suggested. These informative simultaneous confidence intervals are not fully compatible with the initial multiple test, but are close to it and hence provide similar power advantages. They yield a multiple hypothesis test with strong familywise error rate control that can be used in place of the initial multiple test. The current paper extends previous work on informative simultaneous confidence intervals to graphical test procedures. The information gained from the newly suggested simultaneous confidence intervals is shown to always increase with increasing evidence against a null hypothesis. The new simultaneous confidence intervals provide a compromise between information gain and the goal of rejecting as many hypotheses as possible. They are defined via a family of dual graphs and the projection method, and a simple iterative algorithm for their computation is provided. A simulation study illustrates the results for a complex graphical test procedure.
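For orientation, the Bonferroni-Holm procedure mentioned above is the simplest member of this family (as a graphical procedure, it is the complete graph with equal weights). Its step-down adjusted p-values, which control the familywise error rate strongly, can be computed as follows (a standard textbook sketch, not the paper's informative-interval construction):

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values: sort ascending, multiply the k-th
    smallest by (m - k + 1), enforce monotonicity, cap at 1."""
    p = np.asarray(pvals, float)
    m = len(p)
    adj = np.empty(m)
    running = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        running = max(running, (m - rank) * p[idx])  # step-down multiplier
        adj[idx] = min(1.0, running)
    return adj
```

Rejecting exactly the hypotheses with adjusted p-value below α reproduces the closed Holm test; the paper's point is that the confidence bounds compatible with such tests can stick to the null boundary, motivating the informative alternatives.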
Title: Joint time-to-event partial order continual reassessment method and Joint time-to-event Bayesian logistic regression model: Statistical designs for dual agent phase I/II dose finding studies with late-onset toxicity and activity outcomes
Authors: Helen Barnett, Oliver Boix, Dimitris Kontos, Thomas Jaki
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-01 (Epub 2025-12-05) | DOI: 10.1177/09622802251403384 | pp. 186-204

Dual-agent dose-finding trials study the effect of a combination of more than one agent, where the objective is to find the maximum tolerated dose combination: the combination of doses of the two agents associated with a pre-specified risk of being unsafe. In a phase I/II setting, the objective is to find a dose combination that is both safe and active, the optimal biological dose, which optimises a criterion based on both safety and activity. Since oncology treatments are typically given over multiple cycles, both the safety and activity outcomes can be late-onset, potentially occurring in the later cycles of treatment. This work proposes two model-based designs for dual-agent dose-finding studies with late-onset activity and late-onset toxicity outcomes: the joint time-to-event (TITE) partial order continual reassessment method and the joint TITE Bayesian logistic regression model. Their performance is compared alongside a model-assisted comparator in a comprehensive simulation study motivated by a real trial example, with an extension considering alternative-sized dosing grids. Both model-based methods are found to outperform the model-assisted design. Whilst the two model-based designs are comparable on average, this comparability is not consistent across scenarios.
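The TITE idea weights partially followed patients by the fraction of the assessment window they have completed, so pending patients still inform the dose-toxicity fit. A sketch of the weighted likelihood under a one-parameter empiric ("power") CRM working model, an illustrative stand-in for the paper's partial-order and Bayesian logistic formulations (skeleton values and the linear weight are conventions, with weight 1 once an outcome is fully observed):

```python
import numpy as np

def tite_crm_loglik(a, skeleton, y, followup, window):
    """TITE-CRM weighted log-likelihood: toxicity probability p_d = skeleton_d ** exp(a);
    a patient with no toxicity so far contributes log(1 - w * p) with
    w = min(followup / window, 1); observed toxicities contribute log(w * p)."""
    p = np.asarray(skeleton, float) ** np.exp(a)
    w = np.minimum(np.asarray(followup, float) / window, 1.0)
    y = np.asarray(y, float)
    return float(np.sum(y * np.log(w * p) + (1 - y) * np.log(1 - w * p)))
```

Maximising (or, in a Bayesian design, averaging) over the model parameter `a` after each cohort yields updated toxicity estimates per dose without waiting for every patient's full assessment window.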
Title: Using inverse probability of censoring weighting to estimate hypothetical estimands in clinical trials: Should we implement stabilisation, and if so how?
Authors: Jingyi Xuan, Shahrul Mt-Isa, Nicholas R Latimer, Helen Bell Gorrod, William Malbecq, Kristel Vandormael, Victoria Yorke-Edwards, Ian R White
Journal: Statistical Methods in Medical Research | Pub Date: 2026-01-01 (Epub 2025-10-31) | DOI: 10.1177/09622802251387456 | pp. 40-60
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12783383/pdf/

Inverse probability of censoring weighting is an approach used to estimate the hypothetical treatment effect that would have been observed in a clinical trial if certain intercurrent events had not occurred. Although inverse probability of censoring weighting yields unbiased estimates when its key assumptions are satisfied, large standard errors and wide confidence intervals can be a concern. Unstabilised weights are obtained simply as the reciprocal of the probability of remaining uncensored by the intercurrent events. To improve precision, weights can be stabilised by replacing the numerator of the unstabilised weights with functions of time and baseline covariates. Here, we investigate whether stabilised weights are the preferred choice and, if so, how the numerator should be specified. In a simulation study, we assessed the performance of inverse probability of censoring weighting with unstabilised weights and with different forms of stabilisation when the outcome analysis model was correctly specified or mis-specified. Scenarios varied the prevalence of the intercurrent event in one or both randomised arms, the existence of a deterministic intercurrent event, the indirect effect through baseline covariates, the overall treatment effect, the existence and pattern of a time-varying effect, and the sample size. Results show that, compared with unstabilised weights, stabilisation improves the efficiency of the inverse probability of censoring weighting estimator in most cases, and the improvement is pronounced when stabilising for the baseline covariates. However, stabilisation risks increasing the bias when the outcome analysis model is mis-specified.
Pub Date : 2026-01-01Epub Date: 2025-11-13DOI: 10.1177/09622802251393610
Joonha Chang, Wenyaw Chan
Continuous-time Markov chain (CTMC) models and latent classification methods are commonly used to analyze longitudinal categorical outcomes in medical research. While CTMC models are popular for their simplicity and effectiveness, their assumption of constant transition rates presents limitations in capturing dynamic behaviors. To address this, non-homogeneous continuous-time Markov chains (NH-CTMCs) have been developed, incorporating time-varying transition rates to enhance model flexibility. In this study, we leverage closed-form transition probabilities for a fully ergodic two-state NH-CTMC model and propose a latent class clustering approach to identify heterogeneous transition rate patterns within the population. We emphasize the potential advantages of these models in health sciences, particularly for longitudinal studies where transition rates vary over time and across subgroups. Additionally, we demonstrate the practical application of our model using data from an ambulatory hypertension monitoring study.
{"title":"Latent classification of time-dependent transition rates in longitudinal binary outcome data.","authors":"Joonha Chang, Wenyaw Chan","doi":"10.1177/09622802251393610","DOIUrl":"10.1177/09622802251393610","url":null,"abstract":"<p><p>Continuous-time Markov chain (CTMC) models and latent classification methods are commonly used to analyze longitudinal categorical outcomes in medical research. While CTMC models are popular for their simplicity and effectiveness, their assumption of constant transition rates presents limitations in capturing dynamic behaviors. To address this, non-homogeneous continuous-time Markov chains (NH-CTMCs) have been developed, incorporating time-varying transition rates to enhance model flexibility. In this study, we leverage closed-form transition probabilities for a fully ergodic two-state NH-CTMC model and propose a latent class clustering approach to identify heterogeneous transition rate patterns within the population. We emphasize the potential advantages of these models in health sciences, particularly for longitudinal studies where transition rates vary over time and across subgroups. Additionally, we demonstrate the practical application of our model using data from an ambulatory hypertension monitoring study.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"61-78"},"PeriodicalIF":1.9,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12783371/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145513961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}