Quantification of the influence of risk factors with application to cardiovascular diseases in subjects with type 1 diabetes.
Pub Date: 2025-10-01 | Epub Date: 2025-05-21 | DOI: 10.1177/09622802251327680 | pp. 1901-1919
Ornella Moro, Inger Torhild Gram, Maja-Lisa Løchen, Marit B Veierød, Ana Maria Wägner, Giovanni Sebastiani
The future occurrence of a disease can be strongly influenced by specific risk factors. This work presents a comprehensive approach to quantifying the event probability as a function of each risk factor separately, by means of a parametric model. The methodology is mainly described and applied here for a linear model, but the non-linear case is also addressed. To improve estimation accuracy, three distinct methods are developed and their results are integrated. The first is Bayesian, based on a non-informative prior. Each of the other two aggregates sample elements according to their factor values, with the aggregation optimized under its own specific criterion; for one of them, the optimization is performed by simulated annealing. The methodology is applicable across various diseases, but here we quantify the risk of cardiovascular disease in subjects with type 1 diabetes. The results obtained by combining the three methods yield accurate estimates of cardiovascular risk variation rates for the factors considered. Furthermore, the detection of a biological activation phenomenon for one of the factors is illustrated. To quantify the performance of the proposed methodology and to compare it with that of a known method for this type of model, a large simulation study is conducted, and its results are reported here.
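The abstract names simulated annealing for optimizing one of the aggregation criteria but gives no implementation details. Purely as a hedged illustration, a generic simulated-annealing loop over candidate aggregations could look like the sketch below, where `score` (the criterion) and `neighbour` (the move that perturbs a grouping) are hypothetical placeholders, not the authors' method.

    import math
    import random

    def simulated_annealing(score, initial, neighbour, n_iter=10000, t0=1.0, cooling=0.999):
        """Generic simulated annealing that maximizes `score` over candidate states."""
        state, best = initial, initial
        t = t0
        for _ in range(n_iter):
            cand = neighbour(state)
            delta = score(cand) - score(state)
            # Always accept improvements; accept worse moves with probability exp(delta/t)
            if delta >= 0 or random.random() < math.exp(delta / t):
                state = cand
                if score(state) > score(best):
                    best = state
            t *= cooling  # geometric cooling schedule
        return best

A state could be, for example, a tuple of cut points partitioning the sorted factor values into groups.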
{"title":"Quantification of the influence of risk factors with application to cardiovascular diseases in subjects with type 1 diabetes.","authors":"Ornella Moro, Inger Torhild Gram, Maja-Lisa Løchen, Marit B Veierød, Ana Maria Wägner, Giovanni Sebastiani","doi":"10.1177/09622802251327680","DOIUrl":"10.1177/09622802251327680","url":null,"abstract":"<p><p>Future occurrence of a disease can be highly influenced by some specific risk factors. This work presents a comprehensive approach to quantify the event probability as a function of each separate risk factor by means of a parametric model. The proposed methodology is mainly described and applied here in the case of a linear model, but the non-linear case is also addressed. To improve estimation accuracy, three distinct methods are developed and their results are integrated. One of them is Bayesian, based on a non-informative prior. Each of the other two, uses aggregation of sample elements based on their factor values, which is optimized by means of a different specific criterion. For one of these two, optimization is performed by Simulated Annealing. The methodology presented is applicable across various diseases but here we quantify the risk for cardiovascular diseases in subjects with type 1 diabetes. The results obtained combining the three different methods show accurate estimates of cardiovascular risk variation rates for the factors considered. Furthermore, the detection of a biological activation phenomenon for one of the factors is also illustrated. To quantify the performances of the proposed methodology and to compare them with those from a known method used for this type of models, a large simulation study is done, whose results are illustrated here.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1901-1919"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144111965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On prior smoothing with discrete spatial data in the context of disease mapping.
Pub Date: 2025-10-01 | Epub Date: 2025-08-08 | DOI: 10.1177/09622802251362659 | pp. 2091-2107
Garazi Retegui, Alan E Gelfand, Jaione Etxeberria, María Dolores Ugarte
Disease mapping attempts to explain observed health event counts across areal units, typically using Markov random field models. These models rely on spatial priors to account for variation in raw relative risk or rate estimates. Spatial priors introduce some degree of smoothing, wherein, for any particular unit, empirical risk or incidence estimates are either adjusted towards a suitable mean or incorporate neighbor-based smoothing. While model explanation may be the primary focus, the literature lacks a comparison of the amount of smoothing introduced by different spatial priors. Additionally, there has been no investigation into how varying the parameters of these priors influences the resulting smoothing. This study examines seven commonly used spatial priors through both simulations and real data analyses. Drawing on areal maps of peninsular Spain and England, we analyze smoothing effects in two datasets with associated populations at risk. We propose empirical metrics to quantify the smoothing achieved by each model and theoretical metrics to calibrate the expected extent of smoothing as a function of model parameters. The areal maps allow us to characterize quantitatively the extent of smoothing within and across the models, and to link the theoretical metrics to the empirical ones.
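For orientation, one of the most commonly used Markov random field spatial priors in disease mapping is the intrinsic conditional autoregressive (ICAR) prior; its standard conditional form (a textbook formula, not taken from this paper) is

\[
\theta_i \mid \theta_{-i} \;\sim\; \mathrm{N}\!\left( \frac{1}{n_i} \sum_{j \sim i} \theta_j ,\; \frac{\sigma^2}{n_i} \right),
\]

where $j \sim i$ runs over the areas adjacent to area $i$, $n_i$ is the number of neighbours, and $\sigma^2$ controls the overall amount of smoothing: each area's random effect is shrunk towards the average of its neighbours.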
{"title":"On prior smoothing with discrete spatial data in the context of disease mapping.","authors":"Garazi Retegui, Alan E Gelfand, Jaione Etxeberria, María Dolores Ugarte","doi":"10.1177/09622802251362659","DOIUrl":"10.1177/09622802251362659","url":null,"abstract":"<p><p>Disease mapping attempts to explain observed health event counts across areal units, typically using Markov random field models. These models rely on spatial priors to account for variation in raw relative risk or rate estimates. Spatial priors introduce some degree of smoothing, wherein, for any particular unit, empirical risk or incidence estimates are either adjusted towards a suitable mean or incorporate neighbor-based smoothing. While model explanation may be the primary focus, the literature lacks a comparison of the amount of smoothing introduced by different spatial priors. Additionally, there has been no investigation into how varying the parameters of these priors influences the resulting smoothing. This study examines seven commonly used spatial priors through both simulations and real data analyses. Using areal maps of peninsular Spain and England, we analyze smoothing effects using two datasets with associated populations at risk. We propose empirical metrics to quantify the smoothing achieved by each model and theoretical metrics to calibrate the expected extent of smoothing as a function of model parameters. We employ areal maps in order to quantitatively characterize the extent of smoothing within and across the models as well as to link the theoretical metrics to the empirical metrics.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"2091-2107"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144800318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inference procedures in sequential trial emulation with survival outcomes: Comparing confidence intervals based on the sandwich variance estimator, bootstrap and jackknife.
Pub Date: 2025-10-01 | Epub Date: 2025-07-09 | DOI: 10.1177/09622802251356594 | pp. 2011-2033
Juliette M Limozin, Shaun R Seaman, Li Su
Sequential trial emulation (STE) is an approach to estimating causal treatment effects by emulating a sequence of target trials from observational data. In STE, inverse probability weighting is commonly utilised to address time-varying confounding and/or dependent censoring. Then structural models for potential outcomes are applied to the weighted data to estimate treatment effects. For inference, the simple sandwich variance estimator is popular but conservative, while nonparametric bootstrap is computationally expensive, and a more efficient alternative, linearised estimating function (LEF) bootstrap, has not been adapted to STE. We evaluated the performance of various methods for constructing confidence intervals (CIs) of marginal risk differences in STE with survival outcomes by comparing the coverage of CIs based on nonparametric/LEF bootstrap, jackknife, and the sandwich variance estimator through simulations. LEF bootstrap CIs demonstrated better coverage than nonparametric bootstrap CIs and sandwich-variance-estimator-based CIs with small/moderate sample sizes, low event rates and low treatment prevalence, which were the motivating scenarios for STE. They were less affected by treatment group imbalance and faster to compute than nonparametric bootstrap CIs. With large sample sizes and medium/high event rates, the sandwich-variance-estimator-based CIs had the best coverage and were the fastest to compute. These findings offer guidance in constructing CIs in causal survival analysis using STE.
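To make the comparison concrete, the nonparametric bootstrap baseline discussed here resamples subjects and recomputes the estimate on each resample, which is what makes it expensive. A minimal percentile-bootstrap sketch for a marginal risk difference between two binary samples is shown below (illustrative only; in STE the whole weighted-estimation pipeline would be re-run on each resample, and the LEF bootstrap instead perturbs estimating-function contributions).

    import numpy as np

    def bootstrap_rd_ci(treated, control, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for the risk difference between two
        1-D arrays of 0/1 outcomes."""
        rng = np.random.default_rng(seed)
        rds = np.empty(n_boot)
        for b in range(n_boot):
            t = rng.choice(treated, size=len(treated), replace=True)
            c = rng.choice(control, size=len(control), replace=True)
            rds[b] = t.mean() - c.mean()
        return np.quantile(rds, [alpha / 2, 1 - alpha / 2])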
{"title":"Inference procedures in sequential trial emulation with survival outcomes: Comparing confidence intervals based on the sandwich variance estimator, bootstrap and jackknife.","authors":"Juliette M Limozin, Shaun R Seaman, Li Su","doi":"10.1177/09622802251356594","DOIUrl":"10.1177/09622802251356594","url":null,"abstract":"<p><p>Sequential trial emulation (STE) is an approach to estimating causal treatment effects by emulating a sequence of target trials from observational data. In STE, inverse probability weighting is commonly utilised to address time-varying confounding and/or dependent censoring. Then structural models for potential outcomes are applied to the weighted data to estimate treatment effects. For inference, the simple sandwich variance estimator is popular but conservative, while nonparametric bootstrap is computationally expensive, and a more efficient alternative, linearised estimating function (LEF) bootstrap, has not been adapted to STE. We evaluated the performance of various methods for constructing confidence intervals (CIs) of marginal risk differences in STE with survival outcomes by comparing the coverage of CIs based on nonparametric/LEF bootstrap, jackknife, and the sandwich variance estimator through simulations. LEF bootstrap CIs demonstrated better coverage than nonparametric bootstrap CIs and sandwich-variance-estimator-based CIs with small/moderate sample sizes, low event rates and low treatment prevalence, which were the motivating scenarios for STE. They were less affected by treatment group imbalance and faster to compute than nonparametric bootstrap CIs. With large sample sizes and medium/high event rates, the sandwich-variance-estimator-based CIs had the best coverage and were the fastest to compute. These findings offer guidance in constructing CIs in causal survival analysis using STE.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"2011-2033"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of egocentric network-based studies to estimate causal effects under interference.
Pub Date: 2025-10-01 | Epub Date: 2025-07-17 | DOI: 10.1177/09622802251357021 | pp. 2034-2052
Junhan Fang, Donna Spiegelman, Ashley L Buchanan, Laura Forastiere
Many public health interventions are conducted in settings where individuals are connected and the intervention assigned to some individuals may spill over to other individuals. In these settings, we can assess: (a) the individual effect on the treated, (b) the spillover effect on untreated individuals through an indirect exposure to the intervention, and (c) the overall effect on the whole population. Here, we consider an egocentric network-based randomized design in which a set of index participants is recruited and randomly assigned to treatment, while data are also collected on their untreated network members. Such a design is common in peer education interventions conceived to leverage behavioral influence among peers. Using the potential outcomes framework, we first clarify the assumptions required to rely on an identification strategy that is commonly used in the well-studied two-stage randomized design. Under these assumptions, causal effects can be jointly estimated using a regression model with a block-diagonal structure. We then develop sample size formulas for detecting individual, spillover, and overall effects for single and joint hypothesis tests, and investigate the role of different parameters. Finally, we illustrate the use of our sample size formulas for an egocentric network-based randomized experiment to evaluate a peer education intervention for HIV prevention.
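The paper's sample size formulas for individual, spillover, and overall effects are not reproduced in the abstract; as a hedged point of reference, they generalize the standard normal-approximation two-arm calculation sketched below (textbook formula; the network versions additionally involve the number of network members per index participant and their within-network correlation).

    from scipy.stats import norm

    def n_per_arm(delta, sigma, alpha=0.05, power=0.8):
        """Standard two-arm sample size for detecting a mean difference
        `delta` with common SD `sigma` (two-sided test, normal approximation)."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) * sigma / delta) ** 2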
{"title":"Design of egocentric network-based studies to estimate causal effects under interference.","authors":"Junhan Fang, Donna Spiegelman, Ashley L Buchanan, Laura Forastiere","doi":"10.1177/09622802251357021","DOIUrl":"10.1177/09622802251357021","url":null,"abstract":"<p><p>Many public health interventions are conducted in settings where individuals are connected and the intervention assigned to some individuals may spill over to other individuals. In these settings, we can assess: (a) the individual effect on the treated, (b) the spillover effect on untreated individuals through an indirect exposure to the intervention, and (c) the overall effect on the whole population. Here, we consider an egocentric network-based randomized design in which a set of index participants is recruited and randomly assigned to treatment, while data are also collected on their untreated network members. Such a design is common in peer education interventions conceived to leverage behavioral influence among peers. Using the potential outcomes framework, we first clarify the assumptions required to rely on an identification strategy that is commonly used in the well-studied two-stage randomized design. Under these assumptions, causal effects can be jointly estimated using a regression model with a block-diagonal structure. We then develop sample size formulas for detecting individual, spillover, and overall effects for single and joint hypothesis tests, and investigate the role of different parameters. Finally, we illustrate the use of our sample size formulas for an egocentric network-based randomized experiment to evaluate a peer education intervention for HIV prevention.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"2034-2052"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12853655/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144650605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-based approach for two-stage group sequential or adaptive designs in bioequivalence studies using parallel and crossover designs.
Pub Date: 2025-10-01 | Epub Date: 2025-07-13 | DOI: 10.1177/09622802251354925 | pp. 1968-1981
Florence Loingeville, Manel Rakez, Thu Thuy Nguyen, Mark Donnelly, Lanyan Fang, Kevin Feng, Liang Zhao, Stella Grosser, Guoying Sun, Wanjie Sun, France Mentré, Julie Bertrand
In pharmacokinetic (PK) bioequivalence (BE) analysis, the recommended approach is the two one-sided tests (TOST) procedure applied to non-compartmental analysis (NCA) estimates of the area under the plasma drug concentration versus time curve and Cmax (NCA-TOST). Sample size estimation for a BE study requires assumptions on between/within-subject variability (B/WSV). When little prior information is available, interim analysis using two-stage group sequential (GS) or adaptive designs (ADs) may be beneficial. GS fixes the second-stage size, while AD requires sample size re-estimation based on first-stage results. Recent research has proposed a model-based (MB) TOST, using nonlinear mixed effects models, as an alternative to NCA-TOST. This work extends the GS and AD approaches to MB-TOST. We evaluated these approaches on simulated parallel and two-way crossover designs for a one-compartment PK model, considering three variability levels for the initial sample size calculation. We compared final sample size, type I error, and power estimates from one-stage, GS, and AD designs using NCA-TOST and MB-TOST. Within the limits of our available computing power, the results showed that both NCA-TOST and MB-TOST reasonably controlled the type I error while maintaining adequate power in the two-stage GS and AD approaches. Two-stage designs reduced the sample size compared to traditional designs, especially for highly variable drugs, with many trials stopping at Stage 1 under AD designs. Our findings suggest that MB-TOST may serve as a viable alternative to NCA-TOST for BE assessment in two-stage designs, especially when B/WSV impacts BE results.
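The TOST procedure itself is standard: on the log scale, average bioequivalence is concluded if both one-sided null hypotheses (true test/reference ratio below 0.80, or above 1.25) are rejected. A minimal sketch for a parallel design with a pooled-variance t statistic follows (a generic NCA-style illustration, not the paper's model-based TOST).

    import numpy as np
    from scipy import stats

    def tost_parallel(log_test, log_ref, lo=np.log(0.8), hi=np.log(1.25)):
        """TOST for average bioequivalence on log-transformed PK metrics
        (e.g. log AUC) from a parallel design; returns the TOST p-value."""
        n1, n2 = len(log_test), len(log_ref)
        d = log_test.mean() - log_ref.mean()
        sp2 = ((n1 - 1) * log_test.var(ddof=1)
               + (n2 - 1) * log_ref.var(ddof=1)) / (n1 + n2 - 2)
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
        df = n1 + n2 - 2
        p_lo = stats.t.sf((d - lo) / se, df)   # H0: true ratio <= 0.80
        p_hi = stats.t.cdf((d - hi) / se, df)  # H0: true ratio >= 1.25
        return max(p_lo, p_hi)  # conclude BE if below the one-sided level (0.05)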
{"title":"Model-based approach for two-stage group sequential or adaptive designs in bioequivalence studies using parallel and crossover designs.","authors":"Florence Loingeville, Manel Rakez, Thu Thuy Nguyen, Mark Donnelly, Lanyan Fang, Kevin Feng, Liang Zhao, Stella Grosser, Guoying Sun, Wanjie Sun, France Mentré, Julie Bertrand","doi":"10.1177/09622802251354925","DOIUrl":"10.1177/09622802251354925","url":null,"abstract":"<p><p>In pharmacokinetic (PK) bioequivalence (BE) analysis, the recommended approach is the two one-sided tests (TOSTs) on non-compartmental analysis (NCA) estimates of area under the plasma drug concentration versus time curve and <math><msub><mi>C</mi><mrow><mi>m</mi><mi>a</mi><mi>x</mi></mrow></msub></math> (NCA-TOST). Sample size estimation for a BE study requires assumptions on between/within subject variability (B/WSV). When little prior information is available, interim analysis using two-stage group sequential (GS) or adaptive designs (ADs) may be beneficial. GS fixes the second stage size, while AD requires sample re-estimation based on first-stage results. Recent research has proposed model-based (MB) TOST, using nonlinear mixed effects models, as an alternative to NCA-TOST. This work extends GS and AD approaches to MB-TOST. We evaluated these approaches on simulated parallel and two-way crossover designs for a one-compartment PK model, considering three variability levels for initial sample size calculation. We compared final sample size, type I error, and power estimates from one-stage, GS, and AD designs using NCA-TOST and MB-TOST. Results showed both NCA-TOST and MB-TOST reasonably controlled type I error while maintaining adequate power in two-stage GS and AD approaches, based on our limited computation power. Two-stage designs reduced sample size compared to traditional designs, especially for highly variable drugs, with many trials stopping at Stage 1 in AD designs. Our findings suggest MB-TOST may serve as a viable alternative to NCA-TOST for BE assessment in two-stage designs, especially when B/WSV impacts BE results.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1968-1981"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144627016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Health utility adjusted survival: A composite endpoint for clinical trial designs.
Pub Date: 2025-10-01 | Epub Date: 2025-07-09 | DOI: 10.1177/09622802251338409 | pp. 1920-1934
Yangqing Deng, John de Almeida, Wei Xu
Many randomized trials have used overall survival as the primary endpoint for establishing non-inferiority of one treatment compared to another. However, if a treatment is non-inferior to another in terms of overall survival, clinicians may be interested in further exploring which treatment results in better health utility scores for patients. Examining health utility in a secondary analysis is feasible; however, since health utility is not the primary endpoint, it is usually not considered in the sample size calculation, so the power to detect a difference in health utility is not guaranteed. Furthermore, the premise of non-inferiority trials is often to test whether an intervention provides a superior quality of life or toxicity profile without compromising survival compared to the existing standard. Based on this consideration, it may be beneficial to consider both survival and utility when designing a trial. Existing methods can combine survival and quality of life into a single measure, but they either impose strong restrictions or lack a theoretical framework. In this manuscript, we propose a method called health utility adjusted survival, which combines the survival outcome with longitudinal utility measures for treatment comparison. We propose an innovative statistical framework, together with procedures for power analysis and sample size calculation. Through comprehensive simulation studies involving summary statistics from the PET-NECK trial, we demonstrate that our new approach achieves superior power with relatively small sample sizes, and that our composite endpoint can be considered an alternative to overall survival in future clinical trial design and analysis where both survival and health utility are of interest.
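The abstract does not define the estimator, so the following is only a loose, QALY-style illustration of combining a utility trajectory with follow-up time: each patient's observed time is weighted by their piecewise-constant utility, giving a single utility-adjusted time that could then feed a between-arm comparison. All names and values here are hypothetical, not the paper's method.

    import numpy as np

    def utility_adjusted_time(interval_starts, utilities, followup):
        """Integrate a piecewise-constant utility trajectory over follow-up:
        sum over intervals of (utility * time spent in the interval)."""
        grid = np.append(interval_starts, followup)
        return float(np.sum(np.asarray(utilities) * np.diff(grid)))

    # Utility 0.9 on [0, 1), 0.7 on [1, 3), followed for 3 years:
    # 0.9 * 1 + 0.7 * 2 = 2.3 utility-adjusted years
    print(utility_adjusted_time([0.0, 1.0], [0.9, 0.7], 3.0))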
{"title":"Health utility adjusted survival: A composite endpoint for clinical trial designs.","authors":"Yangqing Deng, John de Almeida, Wei Xu","doi":"10.1177/09622802251338409","DOIUrl":"10.1177/09622802251338409","url":null,"abstract":"<p><p>Many randomized trials have used overall survival as the primary endpoint for establishing non-inferiority of one treatment compared to another. However, if a treatment is non-inferior to another treatment in terms of overall survival, clinicians may be interested in further exploring which treatment results in better health utility scores for patients. Examining health utility in a secondary analysis is feasible, however, since health utility is not the primary endpoint, it is usually not considered in the sample size calculation, hence the power to detect a difference of health utility is not guaranteed. Furthermore, often the premise of non-inferiority trials is to test the assumption that an intervention provides superior quality of life or toxicity profile without compromising survival when compared to the existing standard. Based on this consideration, it may be beneficial to consider both survival and utility when designing a trial. There have been methods that can combine survival and quality of life into a single measure, but they either have strong restrictions or lack theoretical frameworks. In this manuscript, we propose a method called health utility adjusted survival, which can combine survival outcome and longitudinal utility measures for treatment comparison. We propose an innovative statistical framework as well as procedures to conduct power analysis and sample size calculation. By comprehensive simulation studies involving summary statistics from the PET-NECK trial, we demonstrate that our new approach can achieve superior power performance using relatively small sample sizes, and our composite endpoint can be considered as an alternative to overall survival in future clinical trial design and analysis where both survival and health utility are of interest.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1920-1934"},"PeriodicalIF":1.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541123/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian clustering prior with overlapping indices for effective use of multisource external data.
Pub Date: 2025-09-15 | DOI: 10.1177/09622802251367439
Xuetao Lu, J Jack Lee
The use of external data in clinical trials offers numerous advantages, such as reducing enrollment, increasing study power, and shortening trial duration. In Bayesian inference, the information in external data can be transferred into an informative prior for future borrowing (i.e. prior synthesis). However, multisource external data often exhibit heterogeneity, which can distort information during prior synthesis. Clustering helps identify this heterogeneity, enhancing the congruence between the synthesized prior and the external data. Obtaining an optimal clustering is challenging because of the trade-off between congruence with the external data and robustness to future data. We introduce two overlapping indices: the overlapping clustering index and the overlapping evidence index. Using these indices alongside a K-means algorithm, the optimal clustering result can be identified by balancing this trade-off, and then used to construct a prior synthesis framework that effectively borrows information from multisource external data. By incorporating the (robust) meta-analytic predictive (MAP) prior within this framework, we develop (robust) Bayesian clustering MAP priors. Simulation studies and real-data analysis demonstrate their advantages over commonly used priors in the presence of heterogeneity. Since the Bayesian clustering priors are constructed without needing data from the prospective study, they can be applied to both study design and data analysis in clinical trials.
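As a hedged sketch of the clustering step only: K-means applied to study-level summary estimates from the external sources separates discordant groups of studies before any prior is synthesized. The data below are hypothetical, and the overlapping indices and MAP-prior synthesis are not shown.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical response-rate estimates from six external studies
    est = np.array([[0.21], [0.24], [0.22], [0.41], [0.44], [0.39]])

    km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(est)
    for k in range(2):
        members = est[km.labels_ == k].ravel()
        print(f"cluster {k}: estimates {members}, mean {members.mean():.3f}")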
{"title":"Bayesian clustering prior with overlapping indices for effective use of multisource external data.","authors":"Xuetao Lu, J Jack Lee","doi":"10.1177/09622802251367439","DOIUrl":"10.1177/09622802251367439","url":null,"abstract":"<p><p>The use of external data in clinical trials offers numerous advantages, such as reducing enrollment, increasing study power, and shortening trial duration. In Bayesian inference, information in external data can be transferred into an informative prior for future borrowing (i.e. prior synthesis). However, multisource external data often exhibits heterogeneity, which can cause information distortion during the prior synthesizing. Clustering helps identifying the heterogeneity, enhancing the congruence between synthesized prior and external data. Obtaining optimal clustering is challenging due to the trade-off between congruence with external data and robustness to future data. We introduce two overlapping indices: the overlapping clustering index and the overlapping evidence index . Using these indices alongside a K-means algorithm, the optimal clustering result can be identified by balancing this trade-off and applied to construct a prior synthesis framework to effectively borrow information from multisource external data. By incorporating the (robust) meta-analytic predictive (MAP) prior within this framework, we develop (robust) Bayesian clustering MAP priors. Simulation studies and real-data analysis demonstrate their advantages over commonly used priors in the presence of heterogeneity. Since the Bayesian clustering priors are constructed without needing the data from prospective study, they can be applied to both study design and data analysis in clinical trials.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251367439"},"PeriodicalIF":1.9,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12669405/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A robust Bayesian dose optimization design with backfill and randomization for phase I/II clinical trials.
Pub Date: 2025-09-05 | DOI: 10.1177/09622802251374290
Yingjie Qiu, Mingyue Li
The integration of backfill cohorts into Phase I clinical trials has garnered increasing interest within the clinical community, particularly following the "Project Optimus" initiative by the U.S. Food and Drug Administration, as detailed in its final guidance of August 2024. This approach allows additional clinical data on safety and activity to be collected before initiating trials that compare multiple dosages. For novel cancer treatments such as targeted therapies, immunotherapies, antibody-drug conjugates, and chimeric antigen receptor T-cell therapies, the efficacy of a drug does not necessarily increase with dose level. Backfill strategies are especially beneficial because they enable continued patient enrollment at lower doses while higher doses are being explored. We propose a robust Bayesian design framework that borrows information across dose levels without imposing stringent parametric assumptions on the dose-response curves. The framework minimizes the risk of administering subtherapeutic doses by jointly evaluating toxicity and efficacy, and it effectively addresses the challenge of delayed outcomes. Simulation studies demonstrate that, across various realistic trial settings, our design not only generates additional data for late-stage studies but also improves the accuracy of optimal dose selection, enhances patient safety, reduces the number of patients receiving subtherapeutic doses, and shortens trial duration.
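The abstract does not specify the probability model; as a hedged illustration of what "jointly evaluating toxicity and efficacy" can mean in a Bayesian dose-finding setting, the sketch below scores each dose with two independent beta-binomial posteriors against an illustrative toxicity cap and efficacy floor. All counts and thresholds are hypothetical.

    from scipy.stats import beta

    # Hypothetical per-dose counts: dose -> (n treated, n toxicities, n responses)
    doses = {10: (12, 1, 3), 25: (12, 2, 6), 50: (9, 4, 6)}
    TOX_CAP, EFF_FLOOR = 0.30, 0.25  # illustrative targets

    for dose, (n, tox, eff) in doses.items():
        # Beta(0.5, 0.5) priors updated with the observed counts
        p_safe = beta.cdf(TOX_CAP, 0.5 + tox, 0.5 + n - tox)     # P(tox rate < cap)
        p_active = beta.sf(EFF_FLOOR, 0.5 + eff, 0.5 + n - eff)  # P(eff rate > floor)
        print(f"dose {dose}: P(safe) = {p_safe:.2f}, P(active) = {p_active:.2f}")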
{"title":"A robust Bayesian dose optimization design with backfill and randomization for phase I/II clinical trials.","authors":"Yingjie Qiu, Mingyue Li","doi":"10.1177/09622802251374290","DOIUrl":"10.1177/09622802251374290","url":null,"abstract":"<p><p>The integration of backfill cohorts into Phase I clinical trials has garnered increasing interest within the clinical community, particularly following the \"Project Optimus\" initiative by the U.S. Food and Drug Administration, as detailed in their final guidance of August 2024. This approach allows for the collection of additional clinical data to assess safety and activity before initiating trials that compare multiple dosages. For novel cancer treatments such as targeted therapies, immunotherapies, antibody-drug conjugates, and chimeric antigen receptor T-cell therapies, the efficacy of a drug may not necessarily increase with dose levels. Backfill strategies are especially beneficial as they enable the continuation of patient enrollment at lower doses while higher doses are being explored. We propose a robust Bayesian design framework that borrows information across dose levels without imposing stringent parametric assumptions on dose-response curves. This framework minimizes the risk of administering subtherapeutic doses by jointly evaluating toxicity and efficacy, and by effectively addressing the challenge of delayed outcomes. Simulation studies demonstrate that our design not only generates additional data for late stage studies but also enhances the accuracy of optimal dose selection, improves patient safety, reduces the number of patients receiving subtherapeutic doses, and shortens trial duration across various realistic trial settings.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251374290"},"PeriodicalIF":1.9,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12669404/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Covariate-adjusted response-adaptive designs for semiparametric survival models.
Pub Date: 2025-09-01 | Epub Date: 2024-11-25 | DOI: 10.1177/09622802241287704 | pp. 1697-1723
Ayon Mukherjee, Sayantee Jana, Stephen Coad
Covariate-adjusted response-adaptive (CARA) designs are effective in increasing the expected number of patients receiving the superior treatment in an ongoing clinical trial, given a patient's covariate profile. There has recently been extensive research on CARA designs with parametric distributional assumptions on patient responses, but the range of applications for such designs is limited in real clinical trials. Sverdlov et al. have pointed out that, irrespective of the specific parametric form of the survival outcomes, their proposed CARA designs based on the exponential model provide valid statistical inference, provided the final analysis is performed using the appropriate accelerated failure time (AFT) model. In real survival trials, however, the planned primary analysis is rarely conducted using an AFT model. The CARA designs proposed here are developed without any distributional assumptions on the survival responses, relying only on the proportional hazards assumption between the two treatment arms. To meet the multiple experimental objectives of a clinical trial, the designs are derived from an optimal allocation approach. The covariate-adjusted doubly adaptive biased coin design and the covariate-adjusted efficient randomized adaptive design are used to randomize patients so as to achieve the derived targets in expectation. These targets are functions of the Cox regression coefficients, which are estimated sequentially as each new patient enters the trial. The merits of the proposed designs are assessed through extensive simulation studies of their operating characteristics, and the designs are then applied to redesign a real-life confirmatory clinical trial.
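For background, the (unadjusted) doubly adaptive biased coin design assigns the next patient to arm 1 with a probability that pulls the current allocation proportion towards the estimated target; the standard Hu-Zhang allocation function is sketched below (generic form, without the covariate adjustment used in this paper).

    def dbcd_prob(x, y, gamma=2.0):
        """Doubly adaptive biased coin design: probability of assigning the
        next patient to arm 1, given current proportion x on arm 1 and
        current estimated target allocation y (0 < x, y < 1)."""
        num = y * (y / x) ** gamma
        den = num + (1 - y) * ((1 - y) / (1 - x)) ** gamma
        return num / den

    # Target 0.6 but only 0.5 allocated to arm 1 so far:
    # the assignment probability is pushed above the target.
    print(dbcd_prob(0.5, 0.6))  # ~0.77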
{"title":"Covariate-adjusted response-adaptive designs for semiparametric survival models.","authors":"Ayon Mukherjee, Sayantee Jana, Stephen Coad","doi":"10.1177/09622802241287704","DOIUrl":"10.1177/09622802241287704","url":null,"abstract":"<p><p>Covariate-adjusted response adaptive (CARA) designs are effective in increasing the expected number of patients receiving superior treatment in an ongoing clinical trial, given a patient's covariate profile. There has recently been extensive research on CARA designs with parametric distributional assumptions on patient responses. However, the range of applications for such designs becomes limited in real clinical trials. Sverdlov et al. have pointed out that irrespective of a specific parametric form of the survival outcomes, their proposed CARA designs based on the exponential model provide valid statistical inference, provided the final analysis is performed using the appropriate accelerated failure time (AFT) model. In real survival trials, however, the planned primary analysis is rarely conducted using an AFT model. The proposed CARA designs are developed obviating any distributional assumptions about the survival responses, relying only on the proportional hazards assumption between the two treatment arms. To meet the multiple experimental objectives of a clinical trial, the proposed designs are developed based on an optimal allocation approach. The covariate-adjusted doubly adaptive biased coin design and the covariate-adjusted efficient-randomized adaptive design are used to randomize the patients to achieve the derived targets on expectation. These expected targets are functions of the Cox regression coefficients that are estimated sequentially with the arrival of every new patient into the trial. The merits of the proposed designs are assessed using extensive simulation studies of their operating characteristics and then have been implemented to re-design a real-life confirmatory clinical trial.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1697-1723"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation to the optimal allocation for response adaptive designs.
Pub Date: 2025-09-01 | Epub Date: 2024-12-12 | DOI: 10.1177/09622802241293750 | pp. 1724-1731
Yanqing Yi, Xikui Wang
We investigate the optimal allocation design for response-adaptive clinical trials under the average reward criterion. The treatment randomization process is formulated as a Markov decision process, and the Bayesian method is used to summarize the information on treatment effects. A span-contraction operator is introduced, and the average reward generated by the policy identified by the operator is shown to converge to the optimal value. We propose an algorithm to approximate the optimal treatment allocation using Thompson sampling and the contraction operator. For the scenario of two treatments with binary responses and a sample size of 200 patients, simulation results demonstrate the efficient learning features of the proposed method. It allocates a high proportion of patients to the better treatment while retaining good statistical power and keeping a small probability of a trial going in the undesired direction. When the difference in success probability to detect is 0.2, the probability of a trial going in the unfavorable direction is < 1.5%, which decreases further to < 0.9% when the difference to detect is 0.3. For normally distributed responses, with a sample size of 100 patients, the proposed method assigns 13% more patients to the better treatment than traditional complete randomization when detecting an effect size of 0.8, with good statistical power and a < 0.7% probability of the trial going in the undesired direction.
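The Thompson sampling component named in the abstract is standard and easy to sketch for two binary-response arms: each arm keeps a beta posterior, and the next patient is assigned to the arm with the larger posterior draw (plain Thompson sampling only; the paper's span-contraction operator is not shown, and the success probabilities below are hypothetical).

    import numpy as np

    rng = np.random.default_rng(42)
    true_p = [0.5, 0.7]            # hypothetical success probabilities
    succ, fail = [0, 0], [0, 0]    # counts on top of Beta(1, 1) priors

    for _ in range(200):
        # Draw from each arm's posterior; treat with the arm whose draw is larger
        draws = [rng.beta(1 + succ[a], 1 + fail[a]) for a in range(2)]
        arm = int(np.argmax(draws))
        if rng.random() < true_p[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1

    print("patients per arm:", [succ[a] + fail[a] for a in range(2)])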
{"title":"Approximation to the optimal allocation for response adaptive designs.","authors":"Yanqing Yi, Xikui Wang","doi":"10.1177/09622802241293750","DOIUrl":"10.1177/09622802241293750","url":null,"abstract":"<p><p>We investigate the optimal allocation design for response adaptive clinical trials, under the average reward criterion. The treatment randomization process is formatted as a Markov decision process and the Bayesian method is used to summarize the information on treatment effects. A span-contraction operator is introduced and the average reward generated by the policy identified by the operator is shown to converge to the optimal value. We propose an algorithm to approximate the optimal treatment allocation using the Thompson sampling and the contraction operator. For the scenario of two treatments with binary responses and a sample size of 200 patients, simulation results demonstrate efficient learning features of the proposed method. It allocates a high proportion of patients to the better treatment while retaining a good statistical power and having a small probability for a trial going in the undesired direction. When the difference in success probability to detect is 0.2, the probability for a trial going in the unfavorable direction is < 1.5%, which decreases further to < 0.9% when the difference to detect is 0.3. For normally distribution responses, with a sample size of 100 patients, the proposed method assigns 13% more patients to the better treatment than the traditional complete randomization in detecting an effect size of difference 0.8, with a good statistical power and a < 0.7% probability for the trial to go in the undesired direction.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1724-1731"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}