Pub Date: 2025-10-01 | Epub Date: 2025-07-09 | DOI: 10.1177/09622802251356594
Juliette M Limozin, Shaun R Seaman, Li Su
Sequential trial emulation (STE) is an approach to estimating causal treatment effects by emulating a sequence of target trials from observational data. In STE, inverse probability weighting is commonly utilised to address time-varying confounding and/or dependent censoring. Then structural models for potential outcomes are applied to the weighted data to estimate treatment effects. For inference, the simple sandwich variance estimator is popular but conservative, while nonparametric bootstrap is computationally expensive, and a more efficient alternative, linearised estimating function (LEF) bootstrap, has not been adapted to STE. We evaluated the performance of various methods for constructing confidence intervals (CIs) of marginal risk differences in STE with survival outcomes by comparing the coverage of CIs based on nonparametric/LEF bootstrap, jackknife, and the sandwich variance estimator through simulations. LEF bootstrap CIs demonstrated better coverage than nonparametric bootstrap CIs and sandwich-variance-estimator-based CIs with small/moderate sample sizes, low event rates and low treatment prevalence, which were the motivating scenarios for STE. They were less affected by treatment group imbalance and faster to compute than nonparametric bootstrap CIs. With large sample sizes and medium/high event rates, the sandwich-variance-estimator-based CIs had the best coverage and were the fastest to compute. These findings offer guidance in constructing CIs in causal survival analysis using STE.
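For readers unfamiliar with the LEF bootstrap, its computational contrast with the nonparametric bootstrap is easy to see in code: rather than refitting the model on every resample, one resamples the individual estimating-function contributions evaluated at the original estimate and takes a single Newton step. Below is a minimal sketch on a plain logistic model with simulated data; it is illustrative only, not the authors' implementation, which applies the idea to weighted pooled logistic models in STE.

```python
# Sketch of the linearised estimating function (LEF) bootstrap on a plain
# logistic model. Illustrative only: the paper uses weighted pooled logistic
# models in sequential trial emulation; here we show the core trick.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([-1.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def fit_logistic(X, y, iters=25):
    """Newton-Raphson for the logistic MLE."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ ((p * (1 - p))[:, None] * X)   # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

beta_hat = fit_logistic(X, y)
p_hat = 1 / (1 + np.exp(-X @ beta_hat))
scores = X * (y - p_hat)[:, None]                # individual score contributions
H_hat = X.T @ ((p_hat * (1 - p_hat))[:, None] * X)

# LEF bootstrap: resample the scores evaluated at beta_hat and take ONE
# Newton step, instead of refitting the model on each resample.
B = 1000
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = beta_hat + np.linalg.solve(H_hat, scores[idx].sum(axis=0))

ci = np.percentile(boot, [2.5, 97.5], axis=0)    # percentile CI per coefficient
print("beta_hat:", beta_hat.round(3))
print("95% LEF bootstrap CIs:\n", ci.round(3))
```

Because each replicate only solves one linear system instead of running a full iterative fit, the cost per resample drops sharply, which is the speed advantage the abstract reports.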
{"title":"Inference procedures in sequential trial emulation with survival outcomes: Comparing confidence intervals based on the sandwich variance estimator, bootstrap and jackknife.","authors":"Juliette M Limozin, Shaun R Seaman, Li Su","doi":"10.1177/09622802251356594","DOIUrl":"10.1177/09622802251356594","url":null,"abstract":"<p><p>Sequential trial emulation (STE) is an approach to estimating causal treatment effects by emulating a sequence of target trials from observational data. In STE, inverse probability weighting is commonly utilised to address time-varying confounding and/or dependent censoring. Then structural models for potential outcomes are applied to the weighted data to estimate treatment effects. For inference, the simple sandwich variance estimator is popular but conservative, while nonparametric bootstrap is computationally expensive, and a more efficient alternative, linearised estimating function (LEF) bootstrap, has not been adapted to STE. We evaluated the performance of various methods for constructing confidence intervals (CIs) of marginal risk differences in STE with survival outcomes by comparing the coverage of CIs based on nonparametric/LEF bootstrap, jackknife, and the sandwich variance estimator through simulations. LEF bootstrap CIs demonstrated better coverage than nonparametric bootstrap CIs and sandwich-variance-estimator-based CIs with small/moderate sample sizes, low event rates and low treatment prevalence, which were the motivating scenarios for STE. They were less affected by treatment group imbalance and faster to compute than nonparametric bootstrap CIs. With large sample sizes and medium/high event rates, the sandwich-variance-estimator-based CIs had the best coverage and were the fastest to compute. These findings offer guidance in constructing CIs in causal survival analysis using STE.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"2011-2033"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-07-13 | DOI: 10.1177/09622802251354925
Florence Loingeville, Manel Rakez, Thu Thuy Nguyen, Mark Donnelly, Lanyan Fang, Kevin Feng, Liang Zhao, Stella Grosser, Guoying Sun, Wanjie Sun, France Mentré, Julie Bertrand
In pharmacokinetic (PK) bioequivalence (BE) analysis, the recommended approach is the two one-sided tests (TOST) procedure on non-compartmental analysis (NCA) estimates of the area under the plasma drug concentration versus time curve and Cmax (NCA-TOST). Sample size estimation for a BE study requires assumptions on between/within-subject variability (B/WSV). When little prior information is available, interim analysis using two-stage group sequential (GS) or adaptive designs (ADs) may be beneficial. GS fixes the second-stage size, while AD re-estimates the sample size based on first-stage results. Recent research has proposed model-based (MB) TOST, using nonlinear mixed-effects models, as an alternative to NCA-TOST. This work extends the GS and AD approaches to MB-TOST. We evaluated these approaches on simulated parallel and two-way crossover designs for a one-compartment PK model, considering three variability levels for the initial sample size calculation. We compared final sample size, type I error, and power estimates from one-stage, GS, and AD designs using NCA-TOST and MB-TOST. Results showed that both NCA-TOST and MB-TOST reasonably controlled type I error while maintaining adequate power in two-stage GS and AD approaches, within the limits of our available computing power. Two-stage designs reduced sample size compared to traditional designs, especially for highly variable drugs, with many trials stopping at Stage 1 in AD designs. Our findings suggest MB-TOST may serve as a viable alternative to NCA-TOST for BE assessment in two-stage designs, especially when B/WSV impacts BE results.
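The TOST decision rule itself is standard: bioequivalence is concluded when the 90% confidence interval for the geometric mean ratio of the log-transformed PK parameter lies within [0.80, 1.25]. A minimal sketch with invented parallel-group data (not the paper's simulations):

```python
# Minimal TOST sketch for a parallel-group bioequivalence comparison on
# log(AUC); the numbers are invented, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
log_auc_T = rng.normal(np.log(100), 0.25, size=24)  # test formulation
log_auc_R = rng.normal(np.log(105), 0.25, size=24)  # reference formulation

diff = log_auc_T.mean() - log_auc_R.mean()
se = np.sqrt(log_auc_T.var(ddof=1) / 24 + log_auc_R.var(ddof=1) / 24)
df = 24 + 24 - 2  # pooled df, equal-variance two-sample approximation

# A 90% CI corresponds to two one-sided tests at alpha = 0.05 each.
t_crit = stats.t.ppf(0.95, df)
lo, hi = np.exp(diff - t_crit * se), np.exp(diff + t_crit * se)
print(f"GMR 90% CI: ({lo:.3f}, {hi:.3f})")
print("Bioequivalent" if 0.80 <= lo and hi <= 1.25 else "Not shown bioequivalent")
```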
{"title":"Model-based approach for two-stage group sequential or adaptive designs in bioequivalence studies using parallel and crossover designs.","authors":"Florence Loingeville, Manel Rakez, Thu Thuy Nguyen, Mark Donnelly, Lanyan Fang, Kevin Feng, Liang Zhao, Stella Grosser, Guoying Sun, Wanjie Sun, France Mentré, Julie Bertrand","doi":"10.1177/09622802251354925","DOIUrl":"10.1177/09622802251354925","url":null,"abstract":"<p><p>In pharmacokinetic (PK) bioequivalence (BE) analysis, the recommended approach is the two one-sided tests (TOSTs) on non-compartmental analysis (NCA) estimates of area under the plasma drug concentration versus time curve and <math><msub><mi>C</mi><mrow><mi>m</mi><mi>a</mi><mi>x</mi></mrow></msub></math> (NCA-TOST). Sample size estimation for a BE study requires assumptions on between/within subject variability (B/WSV). When little prior information is available, interim analysis using two-stage group sequential (GS) or adaptive designs (ADs) may be beneficial. GS fixes the second stage size, while AD requires sample re-estimation based on first-stage results. Recent research has proposed model-based (MB) TOST, using nonlinear mixed effects models, as an alternative to NCA-TOST. This work extends GS and AD approaches to MB-TOST. We evaluated these approaches on simulated parallel and two-way crossover designs for a one-compartment PK model, considering three variability levels for initial sample size calculation. We compared final sample size, type I error, and power estimates from one-stage, GS, and AD designs using NCA-TOST and MB-TOST. Results showed both NCA-TOST and MB-TOST reasonably controlled type I error while maintaining adequate power in two-stage GS and AD approaches, based on our limited computation power. Two-stage designs reduced sample size compared to traditional designs, especially for highly variable drugs, with many trials stopping at Stage 1 in AD designs. Our findings suggest MB-TOST may serve as a viable alternative to NCA-TOST for BE assessment in two-stage designs, especially when B/WSV impacts BE results.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1968-1981"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144627016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | Epub Date: 2025-07-09 | DOI: 10.1177/09622802251338409
Yangqing Deng, John de Almeida, Wei Xu
Many randomized trials have used overall survival as the primary endpoint for establishing non-inferiority of one treatment compared to another. However, if a treatment is non-inferior to another in terms of overall survival, clinicians may be interested in further exploring which treatment results in better health utility scores for patients. Examining health utility in a secondary analysis is feasible; however, since health utility is not the primary endpoint, it is usually not considered in the sample size calculation, so the power to detect a difference in health utility is not guaranteed. Furthermore, the premise of non-inferiority trials is often to test whether an intervention provides a superior quality-of-life or toxicity profile without compromising survival compared to the existing standard. Based on this consideration, it may be beneficial to consider both survival and utility when designing a trial. Existing methods can combine survival and quality of life into a single measure, but they either impose strong restrictions or lack theoretical frameworks. In this manuscript, we propose a method called health utility adjusted survival, which combines the survival outcome and longitudinal utility measures for treatment comparison. We propose an innovative statistical framework as well as procedures for power analysis and sample size calculation. Through comprehensive simulation studies involving summary statistics from the PET-NECK trial, we demonstrate that our new approach achieves superior power with relatively small sample sizes, and that our composite endpoint can serve as an alternative to overall survival in future clinical trial design and analysis where both survival and health utility are of interest.
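The paper develops its own estimator and inference framework; as a heavily hedged illustration of the underlying idea only, a QALY-style quantity that integrates a patient's utility trajectory over follow-up can be sketched in a few lines (the function name and all numbers below are invented, and this is not the paper's estimator):

```python
# Hedged sketch: one generic way to combine survival with longitudinal
# utility is to integrate the utility trajectory over follow-up, a
# QALY-style quantity. NOT the paper's estimator, only the composite idea.
import numpy as np

def utility_adjusted_time(assess_times, utilities, event_time):
    """Trapezoidal integral of utility from time 0 to the event/censoring
    time, carrying the last observed utility forward to event_time."""
    t = np.asarray(assess_times, dtype=float)
    u = np.asarray(utilities, dtype=float)
    keep = t <= event_time
    t, u = t[keep], u[keep]
    t = np.append(t, event_time)   # extend the grid to the endpoint
    u = np.append(u, u[-1])        # last observation carried forward
    return float(np.sum((u[1:] + u[:-1]) / 2 * np.diff(t)))

# Patient assessed at months 0, 3, 6, 9 with declining utility, event at 8.5:
print(utility_adjusted_time([0, 3, 6, 9], [0.9, 0.8, 0.7, 0.6], 8.5))  # 6.55
```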
{"title":"Health utility adjusted survival: A composite endpoint for clinical trial designs.","authors":"Yangqing Deng, John de Almeida, Wei Xu","doi":"10.1177/09622802251338409","DOIUrl":"10.1177/09622802251338409","url":null,"abstract":"<p><p>Many randomized trials have used overall survival as the primary endpoint for establishing non-inferiority of one treatment compared to another. However, if a treatment is non-inferior to another treatment in terms of overall survival, clinicians may be interested in further exploring which treatment results in better health utility scores for patients. Examining health utility in a secondary analysis is feasible, however, since health utility is not the primary endpoint, it is usually not considered in the sample size calculation, hence the power to detect a difference of health utility is not guaranteed. Furthermore, often the premise of non-inferiority trials is to test the assumption that an intervention provides superior quality of life or toxicity profile without compromising survival when compared to the existing standard. Based on this consideration, it may be beneficial to consider both survival and utility when designing a trial. There have been methods that can combine survival and quality of life into a single measure, but they either have strong restrictions or lack theoretical frameworks. In this manuscript, we propose a method called health utility adjusted survival, which can combine survival outcome and longitudinal utility measures for treatment comparison. We propose an innovative statistical framework as well as procedures to conduct power analysis and sample size calculation. By comprehensive simulation studies involving summary statistics from the PET-NECK trial, we demonstrate that our new approach can achieve superior power performance using relatively small sample sizes, and our composite endpoint can be considered as an alternative to overall survival in future clinical trial design and analysis where both survival and health utility are of interest.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1920-1934"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541123/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-15 | DOI: 10.1177/09622802251367439
Xuetao Lu, J Jack Lee
The use of external data in clinical trials offers numerous advantages, such as reducing enrollment, increasing study power, and shortening trial duration. In Bayesian inference, information in external data can be transferred into an informative prior for future borrowing (i.e. prior synthesis). However, multisource external data often exhibit heterogeneity, which can distort information during prior synthesis. Clustering helps identify this heterogeneity, enhancing the congruence between the synthesized prior and the external data. Obtaining an optimal clustering is challenging due to the trade-off between congruence with external data and robustness to future data. We introduce two overlapping indices: the overlapping clustering index and the overlapping evidence index. Using these indices alongside a K-means algorithm, the optimal clustering result can be identified by balancing this trade-off and then used to construct a prior synthesis framework that effectively borrows information from multisource external data. By incorporating the (robust) meta-analytic predictive (MAP) prior within this framework, we develop (robust) Bayesian clustering MAP priors. Simulation studies and real-data analysis demonstrate their advantages over commonly used priors in the presence of heterogeneity. Since the Bayesian clustering priors are constructed without needing data from the prospective study, they can be applied to both study design and data analysis in clinical trials.
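The paper's overlapping indices choose the clustering; as a heavily hedged sketch of the surrounding pipeline only, one can cluster historical control rates with K-means and form a mixture-of-Beta, MAP-style prior weighted by cluster size. All inputs below are invented, the number of clusters is fixed for illustration, and the indices themselves are not reproduced:

```python
# Hedged sketch: cluster historical control response rates and form a
# mixture-of-Beta prior, one component per cluster, weighted by cluster
# size. The paper's overlapping indices (which select the clustering) are
# not reproduced; k is fixed here and the data are invented.
import numpy as np
from scipy import stats

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain Lloyd's algorithm in one dimension."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

hist_rates = np.array([0.18, 0.22, 0.20, 0.41, 0.44, 0.39, 0.21])  # invented
hist_n = np.full(7, 50)                                            # invented
labels = kmeans_1d(hist_rates, k=2)

components, weights = [], []
for j in np.unique(labels):
    r, m = hist_rates[labels == j], hist_n[labels == j]
    a = 1 + (r * m).sum()          # pooled successes within the cluster
    b = 1 + ((1 - r) * m).sum()    # pooled failures within the cluster
    components.append((a, b))
    weights.append((labels == j).mean())

# The synthesized prior for the control rate: a mixture of Beta components.
grid = np.linspace(0.01, 0.99, 99)
prior = sum(w * stats.beta.pdf(grid, a, b)
            for w, (a, b) in zip(weights, components))
print("prior mode near:", grid[np.argmax(prior)])
```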
{"title":"Bayesian clustering prior with overlapping indices for effective use of multisource external data.","authors":"Xuetao Lu, J Jack Lee","doi":"10.1177/09622802251367439","DOIUrl":"10.1177/09622802251367439","url":null,"abstract":"<p><p>The use of external data in clinical trials offers numerous advantages, such as reducing enrollment, increasing study power, and shortening trial duration. In Bayesian inference, information in external data can be transferred into an informative prior for future borrowing (i.e. prior synthesis). However, multisource external data often exhibits heterogeneity, which can cause information distortion during the prior synthesizing. Clustering helps identifying the heterogeneity, enhancing the congruence between synthesized prior and external data. Obtaining optimal clustering is challenging due to the trade-off between congruence with external data and robustness to future data. We introduce two overlapping indices: the overlapping clustering index and the overlapping evidence index . Using these indices alongside a K-means algorithm, the optimal clustering result can be identified by balancing this trade-off and applied to construct a prior synthesis framework to effectively borrow information from multisource external data. By incorporating the (robust) meta-analytic predictive (MAP) prior within this framework, we develop (robust) Bayesian clustering MAP priors. Simulation studies and real-data analysis demonstrate their advantages over commonly used priors in the presence of heterogeneity. Since the Bayesian clustering priors are constructed without needing the data from prospective study, they can be applied to both study design and data analysis in clinical trials.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251367439"},"PeriodicalIF":1.9,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12669405/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-05 | DOI: 10.1177/09622802251374290
Yingjie Qiu, Mingyue Li
The integration of backfill cohorts into Phase I clinical trials has garnered increasing interest within the clinical community, particularly following the "Project Optimus" initiative by the U.S. Food and Drug Administration, as detailed in its final guidance of August 2024. This approach allows for the collection of additional clinical data to assess safety and activity before initiating trials that compare multiple dosages. For novel cancer treatments such as targeted therapies, immunotherapies, antibody-drug conjugates, and chimeric antigen receptor T-cell therapies, the efficacy of a drug may not necessarily increase with dose level. Backfill strategies are especially beneficial because they enable continued patient enrollment at lower doses while higher doses are being explored. We propose a robust Bayesian design framework that borrows information across dose levels without imposing stringent parametric assumptions on dose-response curves. This framework minimizes the risk of administering subtherapeutic doses by jointly evaluating toxicity and efficacy, and by effectively addressing the challenge of delayed outcomes. Simulation studies demonstrate that our design not only generates additional data for late-stage studies but also enhances the accuracy of optimal dose selection, improves patient safety, reduces the number of patients receiving subtherapeutic doses, and shortens trial duration across various realistic trial settings.
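The paper's design borrows across doses and handles delayed outcomes; neither is reproduced below. As a hedged sketch of one generic ingredient, joint posterior screening of toxicity and efficacy followed by utility-based dose selection, with invented counts, thresholds, and utility weights:

```python
# Hedged sketch of one generic ingredient of Bayesian dose optimisation:
# independent Beta-binomial posteriors for toxicity and efficacy per dose,
# an admissibility screen, and utility-based selection. NOT the paper's
# design; all numbers below are invented.
import numpy as np
from scipy import stats

n     = np.array([12, 15, 18])  # patients treated at each of three doses
n_tox = np.array([2, 3, 9])     # observed toxicities
n_eff = np.array([4, 8, 11])    # observed responses

# Independent Beta(1, 1)-binomial posteriors for toxicity and efficacy.
post_tox = [stats.beta(1 + t, 1 + m - t) for t, m in zip(n_tox, n)]
post_eff = [stats.beta(1 + e, 1 + m - e) for e, m in zip(n_eff, n)]

# Safety screen: drop doses likely to exceed a 30% toxicity target.
admissible = np.array([pt.sf(0.30) < 0.80 for pt in post_tox])

# Utility trades efficacy against toxicity; the weight 1.5 is arbitrary.
utility = np.array([pe.mean() - 1.5 * pt.mean()
                    for pe, pt in zip(post_eff, post_tox)])
utility[~admissible] = -np.inf
print("admissible doses:", np.flatnonzero(admissible))
print("selected optimal dose index:", int(np.argmax(utility)))
```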
{"title":"A robust Bayesian dose optimization design with backfill and randomization for phase I/II clinical trials.","authors":"Yingjie Qiu, Mingyue Li","doi":"10.1177/09622802251374290","DOIUrl":"10.1177/09622802251374290","url":null,"abstract":"<p><p>The integration of backfill cohorts into Phase I clinical trials has garnered increasing interest within the clinical community, particularly following the \"Project Optimus\" initiative by the U.S. Food and Drug Administration, as detailed in their final guidance of August 2024. This approach allows for the collection of additional clinical data to assess safety and activity before initiating trials that compare multiple dosages. For novel cancer treatments such as targeted therapies, immunotherapies, antibody-drug conjugates, and chimeric antigen receptor T-cell therapies, the efficacy of a drug may not necessarily increase with dose levels. Backfill strategies are especially beneficial as they enable the continuation of patient enrollment at lower doses while higher doses are being explored. We propose a robust Bayesian design framework that borrows information across dose levels without imposing stringent parametric assumptions on dose-response curves. This framework minimizes the risk of administering subtherapeutic doses by jointly evaluating toxicity and efficacy, and by effectively addressing the challenge of delayed outcomes. Simulation studies demonstrate that our design not only generates additional data for late stage studies but also enhances the accuracy of optimal dose selection, improves patient safety, reduces the number of patients receiving subtherapeutic doses, and shortens trial duration across various realistic trial settings.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251374290"},"PeriodicalIF":1.9,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12669404/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | Epub Date: 2024-11-25 | DOI: 10.1177/09622802241287704
Ayon Mukherjee, Sayantee Jana, Stephen Coad
Covariate-adjusted response adaptive (CARA) designs are effective in increasing the expected number of patients receiving the superior treatment in an ongoing clinical trial, given a patient's covariate profile. There has recently been extensive research on CARA designs with parametric distributional assumptions on patient responses. However, the range of applications for such designs is limited in real clinical trials. Sverdlov et al. have pointed out that, irrespective of the specific parametric form of the survival outcomes, their proposed CARA designs based on the exponential model provide valid statistical inference, provided the final analysis is performed using the appropriate accelerated failure time (AFT) model. In real survival trials, however, the planned primary analysis is rarely conducted using an AFT model. The proposed CARA designs are developed without any distributional assumptions about the survival responses, relying only on the proportional hazards assumption between the two treatment arms. To meet the multiple experimental objectives of a clinical trial, the proposed designs are based on an optimal allocation approach. The covariate-adjusted doubly adaptive biased coin design and the covariate-adjusted efficient-randomized adaptive design are used to randomize patients so as to achieve the derived targets in expectation. These expected targets are functions of the Cox regression coefficients, which are estimated sequentially as each new patient enters the trial. The merits of the proposed designs are assessed using extensive simulation studies of their operating characteristics, and the designs are then applied to redesign a real-life confirmatory clinical trial.
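The allocation step of the doubly adaptive biased coin design has a standard closed form (Hu and Zhang's allocation function); a minimal sketch follows, with an illustrative target rather than the survival-specific, covariate-adjusted target derived in the paper:

```python
# Sketch of the doubly adaptive biased coin design (DBCD) allocation
# function in its standard Hu-Zhang form: given the current observed
# allocation proportion x and the estimated target rho, the next patient
# is assigned to treatment 1 with probability g(x, rho). The target used
# here is illustrative, not the paper's Cox-based target.
def dbcd_prob(x, rho, gamma=2.0):
    """Hu-Zhang allocation probability g(x, rho); gamma tunes the pull."""
    if x in (0.0, 1.0):                 # degenerate edge cases
        return 1.0 - x
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# An under-represented arm gets boosted: target 0.6, currently at 0.5.
print(dbcd_prob(0.5, 0.6))   # about 0.77, i.e. pulled above the target
```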
{"title":"Covariate-adjusted response-adaptive designs for semiparametric survival models.","authors":"Ayon Mukherjee, Sayantee Jana, Stephen Coad","doi":"10.1177/09622802241287704","DOIUrl":"10.1177/09622802241287704","url":null,"abstract":"<p><p>Covariate-adjusted response adaptive (CARA) designs are effective in increasing the expected number of patients receiving superior treatment in an ongoing clinical trial, given a patient's covariate profile. There has recently been extensive research on CARA designs with parametric distributional assumptions on patient responses. However, the range of applications for such designs becomes limited in real clinical trials. Sverdlov et al. have pointed out that irrespective of a specific parametric form of the survival outcomes, their proposed CARA designs based on the exponential model provide valid statistical inference, provided the final analysis is performed using the appropriate accelerated failure time (AFT) model. In real survival trials, however, the planned primary analysis is rarely conducted using an AFT model. The proposed CARA designs are developed obviating any distributional assumptions about the survival responses, relying only on the proportional hazards assumption between the two treatment arms. To meet the multiple experimental objectives of a clinical trial, the proposed designs are developed based on an optimal allocation approach. The covariate-adjusted doubly adaptive biased coin design and the covariate-adjusted efficient-randomized adaptive design are used to randomize the patients to achieve the derived targets on expectation. These expected targets are functions of the Cox regression coefficients that are estimated sequentially with the arrival of every new patient into the trial. The merits of the proposed designs are assessed using extensive simulation studies of their operating characteristics and then have been implemented to re-design a real-life confirmatory clinical trial.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1697-1723"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | Epub Date: 2024-12-12 | DOI: 10.1177/09622802241293750
Yanqing Yi, Xikui Wang
We investigate the optimal allocation design for response adaptive clinical trials under the average reward criterion. The treatment randomization process is formulated as a Markov decision process, and the Bayesian method is used to summarize the information on treatment effects. A span-contraction operator is introduced, and the average reward generated by the policy identified by the operator is shown to converge to the optimal value. We propose an algorithm to approximate the optimal treatment allocation using Thompson sampling and the contraction operator. For the scenario of two treatments with binary responses and a sample size of 200 patients, simulation results demonstrate efficient learning features of the proposed method. It allocates a high proportion of patients to the better treatment while retaining good statistical power and keeping a small probability of a trial going in the undesired direction. When the difference in success probability to detect is 0.2, the probability of a trial going in the unfavorable direction is < 1.5%, which decreases further to < 0.9% when the difference to detect is 0.3. For normally distributed responses with a sample size of 100 patients, the proposed method assigns 13% more patients to the better treatment than traditional complete randomization when detecting an effect size of 0.8, with good statistical power and a < 0.7% probability of the trial going in the undesired direction.
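A minimal Thompson sampling sketch for the two-treatment binary-response scenario conveys the adaptive allocation mechanism; the span-contraction operator that targets the average-reward-optimal policy is the paper's contribution and is not reproduced here:

```python
# Minimal Thompson sampling sketch for a two-armed trial with binary
# responses and Beta(1, 1) priors. Illustrative only: the paper combines
# Thompson sampling with a span-contraction operator, not shown here.
import numpy as np

rng = np.random.default_rng(2)
p_true = np.array([0.4, 0.6])          # success probabilities (difference 0.2)
succ = np.ones(2); fail = np.ones(2)   # Beta(1, 1) prior counts

assignments = np.zeros(2, dtype=int)
for _ in range(200):                   # sample size of 200, as in the paper
    draws = rng.beta(succ, fail)       # one posterior draw per arm
    arm = int(np.argmax(draws))        # assign to the arm with the larger draw
    y = rng.binomial(1, p_true[arm])
    succ[arm] += y; fail[arm] += 1 - y
    assignments[arm] += 1

print("patients per arm:", assignments)  # most patients go to the better arm
```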
{"title":"Approximation to the optimal allocation for response adaptive designs.","authors":"Yanqing Yi, Xikui Wang","doi":"10.1177/09622802241293750","DOIUrl":"10.1177/09622802241293750","url":null,"abstract":"<p><p>We investigate the optimal allocation design for response adaptive clinical trials, under the average reward criterion. The treatment randomization process is formatted as a Markov decision process and the Bayesian method is used to summarize the information on treatment effects. A span-contraction operator is introduced and the average reward generated by the policy identified by the operator is shown to converge to the optimal value. We propose an algorithm to approximate the optimal treatment allocation using the Thompson sampling and the contraction operator. For the scenario of two treatments with binary responses and a sample size of 200 patients, simulation results demonstrate efficient learning features of the proposed method. It allocates a high proportion of patients to the better treatment while retaining a good statistical power and having a small probability for a trial going in the undesired direction. When the difference in success probability to detect is 0.2, the probability for a trial going in the unfavorable direction is < 1.5%, which decreases further to < 0.9% when the difference to detect is 0.3. For normally distribution responses, with a sample size of 100 patients, the proposed method assigns 13% more patients to the better treatment than the traditional complete randomization in detecting an effect size of difference 0.8, with a good statistical power and a < 0.7% probability for the trial to go in the undesired direction.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1724-1731"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | DOI: 10.1177/09622802251367443
Angela Carollo, Hein Putter, Paul Hc Eilers, Jutta Gampe
Competing risks models can involve more than one time scale. A relevant example is the study of mortality after a cancer diagnosis, where time since diagnosis and age may jointly determine the hazards of death due to different causes. Multiple time scales have rarely been explored in the context of competing events. Here, we propose a model in which the cause-specific hazards vary smoothly over two time scales. The model is estimated by two-dimensional P-splines, exploiting the equivalence between hazard smoothing and Poisson regression. The data are arranged on a grid so that we can make use of generalised linear array models for efficient computation. The R package TwoTimeScales implements the model. As a motivating example, we analyse mortality after diagnosis of breast cancer, distinguishing between death due to breast cancer and all other causes of death. The time scales are age and time since diagnosis. We use data from the Surveillance, Epidemiology and End Results (SEER) program. In the SEER data, age at diagnosis is provided with a last open-ended category, leading to coarsely grouped data. We use the two-dimensional penalised composite link model to ungroup the data before applying the competing risks model with two time scales.
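The equivalence the model exploits is easiest to see in one dimension: smoothing a hazard with P-splines is penalised Poisson regression of event counts on a B-spline basis with log person-time as offset. A hedged one-dimensional sketch follows; the paper's model is two-dimensional and uses array (GLAM) computations via the R package TwoTimeScales, none of which is reproduced, and the data below are simulated.

```python
# One-dimensional P-spline hazard smoothing as penalised Poisson regression:
# deaths ~ Poisson(exp(B @ beta) * persontime), with a second-order
# difference penalty on the B-spline coefficients. Data are simulated.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)

# Aggregated data: event counts and person-time per time bin.
nbins, T = 40, 10.0
edges = np.linspace(0, T, nbins + 1)
mid = (edges[:-1] + edges[1:]) / 2
true_haz = 0.05 + 0.04 * np.sin(mid)           # smooth "true" hazard
persontime = rng.uniform(80, 120, nbins)
deaths = rng.poisson(true_haz * persontime)

# Cubic B-spline basis on equally spaced knots (clamped at the ends).
k, nseg = 3, 12
inner = np.linspace(0, T, nseg + 1)
knots = np.concatenate([np.repeat(0.0, k), inner, np.repeat(T, k)])
B = BSpline.design_matrix(mid, knots, k).toarray()
nb = B.shape[1]
D = np.diff(np.eye(nb), n=2, axis=0)           # second-order difference penalty
lam = 10.0                                     # smoothing parameter (fixed here)

# Penalised IRLS for the Poisson model with log person-time offset.
beta = np.zeros(nb)
offset = np.log(persontime)
for _ in range(50):
    eta = B @ beta + offset
    mu = np.exp(eta)
    z = (eta - offset) + (deaths - mu) / mu    # working response
    lhs = B.T @ (mu[:, None] * B) + lam * (D.T @ D)
    beta = np.linalg.solve(lhs, B.T @ (mu * z))

smoothed = np.exp(B @ beta)                    # fitted hazard on the bin grid
print(np.round(smoothed[:5], 4), "vs true", np.round(true_haz[:5], 4))
```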
{"title":"Competing risks models with two time scales.","authors":"Angela Carollo, Hein Putter, Paul Hc Eilers, Jutta Gampe","doi":"10.1177/09622802251367443","DOIUrl":"10.1177/09622802251367443","url":null,"abstract":"<p><p>Competing risks models can involve more than one time scale. A relevant example is the study of mortality after a cancer diagnosis, where time since diagnosis but also age may jointly determine the hazards of death due to different causes. Multiple time scales have rarely been explored in the context of competing events. Here, we propose a model in which the cause-specific hazards vary smoothly over two times scales. It is estimated by two-dimensional <math><mi>P</mi></math>-splines, exploiting the equivalence between hazard smoothing and Poisson regression. The data are arranged on a grid so that we can make use of generalised linear array models for efficient computations. The R-package TwoTimeScales implements the model. As a motivating example we analyse mortality after diagnosis of breast cancer and we distinguish between death due to breast cancer and all other causes of death. The time scales are age and time since diagnosis. We use data from the Surveillance, Epidemiology and End Results (SEER) program. In the SEER data, age at diagnosis is provided with a last open-ended category, leading to coarsely grouped data. We use the two-dimensional penalised composite link model to ungroup the data before applying the competing risks model with two time scales.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251367443"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12669410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144969832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | Epub Date: 2025-04-13 | DOI: 10.1177/09622802241313283
Ziqing Guo, Yang Liu, Lucy Xia
Balancing influential covariates is crucial for valid treatment comparisons in clinical studies. While covariate-adaptive randomization is commonly used to achieve balance, its performance can be inadequate when the number of baseline covariates is large. It is, therefore, essential to identify the influential factors associated with the outcome and ensure balance among these critical covariates. In this article, we propose a novel adaptive randomization approach that integrates patients' responses and covariate information to sequentially select significant covariates and maintain their balance. We establish theoretically the consistency of our covariate selection method and demonstrate that the improved covariate balancing, as evidenced by a faster convergence rate of the imbalance measure, leads to higher efficiency in estimating treatment effects. Furthermore, we provide extensive numerical and empirical studies to illustrate the benefits of our proposed method across various settings.
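As a heavily hedged sketch of the generic idea only (not the authors' procedure), one could select outcome-associated covariates with a lasso on the accumulated data and then bias the next assignment toward the arm that reduces imbalance on the selected covariates, Pocock-Simon style. Everything below, including the selection rule and the biasing probabilities, is invented for illustration:

```python
# Hedged sketch: lasso-based covariate selection followed by a simple
# marginal-imbalance rule for the next assignment. NOT the authors'
# method; data, thresholds, and biasing probabilities are invented.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 120, 10
X = rng.normal(size=(n, p))                    # accumulated covariates
trt = rng.integers(0, 2, n)                    # past assignments
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.5 * trt + rng.normal(size=n)

# Step 1: select covariates associated with the outcome.
sel = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_ != 0)

# Step 2: assign the next patient to the arm reducing marginal imbalance
# on the selected covariates (binarised at their medians).
x_new = rng.normal(size=p)

def imbalance(arm):
    score = 0.0
    for j in sel:
        cut = np.median(X[:, j])
        match = (X[:, j] > cut) == (x_new[j] > cut)   # same stratum as x_new
        counts = np.bincount(trt[match], minlength=2)
        counts[arm] += 1                              # hypothetical assignment
        score += abs(counts[0] - counts[1])
    return score

probs = (0.8, 0.2) if imbalance(0) < imbalance(1) else (0.2, 0.8)
print("selected covariates:", sel, "| P(arm 0):", probs[0])
```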
{"title":"Covariate selection for optimizing balance with an innovative adaptive randomization approach.","authors":"Ziqing Guo, Yang Liu, Lucy Xia","doi":"10.1177/09622802241313283","DOIUrl":"10.1177/09622802241313283","url":null,"abstract":"<p><p>Balancing influential covariates is crucial for valid treatment comparisons in clinical studies. While covariate-adaptive randomization is commonly used to achieve balance, its performance can be inadequate when the number of baseline covariates is large. It is, therefore, essential to identify the influential factors associated with the outcome and ensure balance among these critical covariates. In this article, we propose a novel adaptive randomization approach that integrates the patients' responses and covariates information to select sequentially significant covariates and maintain their balance. We establish theoretically the consistency of our covariate selection method and demonstrate that the improved covariate balancing, as evidenced by a faster convergence rate of the imbalance measure, leads to higher efficiency in estimating treatment effects. Furthermore, we provide extensive numerical and empirical studies to illustrate the benefits of our proposed method across various settings.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1751-1779"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144035504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | Epub Date: 2025-07-24 | DOI: 10.1177/09622802251354924
Oleksandr Sverdlov, Jone Renteria, Kerstine Carter, Annika L Scheffold, Johannes Krisam, Pietro Mascheroni, Jan Seidel
Background: There is emerging evidence of the increasing uptake of response-adaptive randomization (RAR) in clinical trials. However, a systematic review of RAR trials, their context of use, characteristics, and stakeholder acceptance has been lacking. Methods: We performed a systematic review of clinical trials that utilized elements of RAR, identified via the Cortellis Regulatory Intelligence database following a pre-specified selection process. We report a summary of relevant characteristics of the identified trials. Results: Out of 170 records, 39 RAR trials were identified (22 completed, 17 ongoing as of October 2024). The majority were Phase 2-focused studies (phases 1/2, 2, 2b, and 2/3), academically sponsored, and concentrated in oncology, neurology, and infectious diseases. Small molecules and biologics were the most common investigational products. Among the 22 completed trials, seven reported positive outcomes. Notably, two of these trials provided pivotal data that informed the further development and subsequent regulatory approval of the investigational compounds. Conclusion: Over the past two decades, RAR has been increasingly utilized in complex adaptive trials across diverse therapeutic areas and clinical research phases. This systematic review provides a critical "baseline" for tracing the dynamics of RAR applications and should help the clinical research community recognize RAR as a valuable methodology for optimizing future trial designs.
{"title":"To what extent is response-adaptive randomization used in clinical trials? A systematic review using Cortellis Regulatory Intelligence database.","authors":"Oleksandr Sverdlov, Jone Renteria, Kerstine Carter, Annika L Scheffold, Johannes Krisam, Pietro Mascheroni, Jan Seidel","doi":"10.1177/09622802251354924","DOIUrl":"10.1177/09622802251354924","url":null,"abstract":"<p><p><b>Background:</b> There is emerging evidence of the increasing uptake of response-adaptive randomization (RAR) in clinical trials. However, a systematic review of RAR trials, their context of use, characteristics, and stakeholder acceptance has been lacking. <b>Methods:</b> We performed a systematic review of clinical trials that utilized elements of RAR, identified via the Cortellis Regulatory Intelligence database following a pre-specified selection process. We report a summary of relevant characteristics of the identified trials. <b>Results:</b> Out of 170 records, 39 RAR trials were identified (22 completed, 17 ongoing as of October 2024). The majority were Phase 2-focused studies (phases 1/2, 2, 2b, and 2/3), academically sponsored, and concentrated in oncology, neurology, and infectious diseases. Small molecules and biologics were the most common investigational products. Among the 22 completed trials, seven reported positive outcomes. Notably, two of these trials provided pivotal data that informed the further development and subsequent regulatory approval of the investigational compounds. <b>Conclusion:</b> Over the past two decades, RAR has been increasingly utilized in complex adaptive trials across diverse therapeutic areas and clinical research phases. This systematic review provides a critical \"baseline\" for tracing the dynamics of RAR applications and should help the clinical research community recognize RAR as a valuable methodology for optimizing future trial designs.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1875-1885"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144699567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}