Pub Date: 2025-10-01 | Epub Date: 2025-04-28 | DOI: 10.1080/10543406.2025.2489285
Matthew A Psioda, Nathan W Bean, Brielle A Wright, Yuelin Lu, Alejandro Mantero, Antara Majumdar
We propose an approach for constructing and evaluating the performance of inverse probability weighted robust mixture priors (IPW-RMP) which are applied to the parameters in treatment group-specific marginal models. Our framework allows practitioners to systematically study the robustness of Bayesian dynamic borrowing using the IPW-RMP to enhance the efficiency of inferences on marginal treatment effects (e.g. marginal risk difference) in a target study being planned. A key assumption motivating our work is that the data generation processes for the target study and external data source (e.g. historical study) will not be the same, likely having different distributions for key prognostic factors and possibly different outcome distributions even for individuals who have identical prognostic factors (e.g. different outcome model parameters). We demonstrate the approach using simulation studies based on both binary and time-to-event outcomes, and via a case study based on actual clinical trial data for a solid tumor cancer program. Our simulation results show that when the distribution of risk factors does in fact differ, the IPW-RMP provides improved performance compared to a standard RMP (e.g. increased power and reduced bias of the posterior mean point estimator) with essentially no loss of performance when the risk factor distributions do not differ. Thus, the IPW-RMP can safely be used in any situation where a standard RMP is appropriate.
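The robust mixture prior at the core of this construction can be illustrated for a binary endpoint. The following minimal sketch (plain Python; it is not the authors' IPW-RMP implementation, and the component parameters are invented for illustration) updates a two-component Beta mixture prior — an informative component built from historical data plus a vague robustifying component — and shows how the posterior weight on the informative component collapses when the new data conflict with it:

```python
from math import exp, lgamma

def betaln(a, b):
    """Log of the Beta function."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior(y, n, w, informative, vague):
    """Posterior of a two-component Beta mixture prior after observing
    y responders out of n.  Each component (a, b) updates to
    (a + y, b + n - y); the mixture weights update in proportion to
    each component's marginal likelihood of the data.  The binomial
    coefficient C(n, y) is common to both components and cancels."""
    comps = [informative, vague]
    weights = [w, 1.0 - w]
    logm = [betaln(a + y, b + n - y) - betaln(a, b) for a, b in comps]
    m = [wk * exp(lk) for wk, lk in zip(weights, logm)]
    total = sum(m)
    return [mk / total for mk in m], [(a + y, b + n - y) for a, b in comps]

# Historical data suggest a response rate near 0.2: informative Beta(20, 80)
# with prior weight 0.8; the robustifying component is a vague Beta(1, 1).
w_agree, _ = mixture_posterior(y=10, n=50, w=0.8,
                               informative=(20, 80), vague=(1, 1))
w_conflict, _ = mixture_posterior(y=30, n=50, w=0.8,
                                  informative=(20, 80), vague=(1, 1))
```

With concordant data (10/50) the informative component retains most of its weight; with conflicting data (30/50) nearly all weight shifts to the vague component — the self-limiting behavior that makes the borrowing "dynamic".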
Title: Inverse probability weighted Bayesian dynamic borrowing for estimation of marginal treatment effects with application to hybrid control arm oncology studies. Journal of Biopharmaceutical Statistics, pp. 1083-1105.
Pub Date: 2025-10-01 | Epub Date: 2025-04-28 | DOI: 10.1080/10543406.2025.2489283
Lei Shi, Herbert Pang, Chen Chen, Jiawen Zhu
Randomized controlled trials (RCTs) are considered the gold standard for treatment effect evaluation in clinical development. However, designing and analyzing RCTs poses many challenges such as how to ensure the validity and improve the power for hypothesis testing with a limited sample size or how to account for a crossover in treatment allocation. One promising approach to circumvent these problems is to incorporate external controls from additional data sources. This manuscript introduces a new R package called rdborrow, which implements several external control borrowing methods under a causal inference framework to facilitate the design and analysis of clinical trials with longitudinal outcomes. More concretely, our package provides an Analysis module, which implements the weighting methods proposed in Zhou et al. (2024), as well as the difference-in-differences and synthetic control methods proposed in Zhou et al. (2024) for external control borrowing. Meanwhile, our package features a Simulation module which can be used to simulate trial data for study design implementation, evaluate the performance of different estimators, and conduct power analysis. In reproducible code examples, we generate simulated data sets mimicking the real data and illustrate the process users can follow to conduct simulation and analysis based on the proposed causal inference methods for randomized controlled trial data incorporating external control data.
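Although rdborrow itself is an R package, the weighting idea it builds on can be sketched in a few lines of Python. This is a generic illustration on simulated data — not the package's API and not the exact estimators of Zhou et al. (2024): fit a propensity model for trial membership, weight each external control by the odds e/(1-e) of being in the trial, and compare weighted outcome means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: trial controls (s = 1) and external controls (s = 0)
# whose covariate x is shifted relative to the trial population.
n_trial, n_ext = 100, 200
x = np.concatenate([rng.normal(0.0, 1.0, n_trial),
                    rng.normal(0.5, 1.0, n_ext)])
s = np.concatenate([np.ones(n_trial), np.zeros(n_ext)])
y = 1.0 * x + rng.normal(0.0, 1.0, x.size)   # outcome depends on x only

# Logistic regression of trial membership on x (Newton-Raphson).
X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (s - p))

e = 1.0 / (1.0 + np.exp(-X @ beta))   # propensity of being in the trial
w_ext = e / (1.0 - e)                 # odds weights for external rows

# The weighted external-control mean targets the trial population.
mean_trial = y[s == 1].mean()
mean_ext_raw = y[s == 0].mean()
mean_ext_ipw = np.average(y[s == 0], weights=w_ext[s == 0])
```

Because the external covariate distribution is shifted upward, the raw external mean overstates the trial-control mean; reweighting pulls it back toward the trial population.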
Title: rdborrow: an R package for causal inference incorporating external controls in randomized controlled trials with longitudinal outcomes. Journal of Biopharmaceutical Statistics, pp. 1043-1066.
Pub Date: 2025-10-01 | Epub Date: 2025-04-20 | DOI: 10.1080/10543406.2025.2489291
Kaiyuan Hua, Hwanhee Hong, Xiaofei Wang
Biomarker-guided designs are increasingly used to evaluate personalized treatments based on patients' biomarker status in Phase II and III clinical trials. With adaptive enrichment, these designs can improve the efficiency of evaluating the treatment effect in biomarker-positive patients by increasing their proportion in the randomized trial. While time-to-event outcomes are often used as the primary endpoint to measure treatment effects for a new therapy in severe diseases like cancer and cardiovascular diseases, there is limited research on biomarker-guided adaptive enrichment trials in this context. Such trials almost always adopt hazard ratio methods for statistical measurement of treatment effects. In contrast, restricted mean survival time (RMST) has gained popularity for analyzing time-to-event outcomes because it offers more straightforward interpretations of treatment effects and does not require the proportional hazards assumption. This paper proposes a two-stage biomarker-guided adaptive RMST design with threshold detection and patient enrichment. We develop methods for identifying the optimal biomarker threshold and biomarker-positive subgroup, treatment effect estimators, and approaches for type I error control, power analysis, and sample size calculation. We present a numerical example of re-designing an oncology trial. An extensive simulation study is conducted to evaluate the performance of the proposed design.
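RMST, the design's effect measure, is simply the area under the Kaplan-Meier survival curve up to a truncation time τ. A minimal stdlib-Python sketch follows (illustrative only — not the proposed design's estimator, which additionally handles threshold detection and enrichment):

```python
def rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve from 0 to tau.  `events[i]` is 1 for an event, 0 for
    censoring; tied times are processed sequentially."""
    data = sorted(zip(times, events))
    s, t_prev, area = 1.0, 0.0, 0.0
    n = len(data)
    for i, (t, d) in enumerate(data):
        if t > tau:
            break
        area += s * (t - t_prev)   # survival is flat between event times
        if d:                      # KM curve steps down only at events
            s *= 1.0 - 1.0 / (n - i)
        t_prev = t
    area += s * (tau - t_prev)     # flat tail up to the truncation time
    return area
```

For example, with event times 1, 2, 3 (no censoring) and τ = 3, the KM curve takes values 1, 2/3, 1/3 on the three unit intervals, giving an RMST of 2.0.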
Title: Biomarker-guided adaptive enrichment design with threshold detection for clinical trials with time-to-event outcome. Journal of Biopharmaceutical Statistics, pp. 1209-1226. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12353384/pdf/
Pub Date: 2025-10-01 | Epub Date: 2025-04-29 | DOI: 10.1080/10543406.2025.2489286
Yuqing Liu, Wendy Lou, Shein-Chung Chow
Biosimilars play a crucial role in increasing the accessibility and affordability of biological therapies; thus, precise and reliable assessment methods are essential for their regulatory approval and clinical adoption. Currently, the 2-sequence, 2-period crossover design is recommended for two-treatment biosimilar studies. However, such designs may be inadequate for practical assessment when multiple test or reference products are involved, particularly in scenarios such as: (1) bridging biosimilar results across regulatory regions (e.g. the European Union, Canada, and the United States), or (2) evaluating biosimilarity across different dosage forms or routes of administration. To address these challenges, multi-treatment designs such as the Latin-square design, Williams design, and balanced incomplete block design can be considered. More recently, the complete N-of-1 trial design, which contains all permutations of treatments with replacement, has gained attention in biosimilar drug development, especially in the presence of carryover effects. However, detailed statistical methodologies and comprehensive performance comparisons of these designs are lacking in the context of multi-formulation studies. This study employs a linear mixed-effects model to estimate the contrast of treatment effects across three drug products within the framework of the designs under investigation. Subsequently, the relationship between sample size and relative efficiency is explored under the same significance level and statistical power. The findings indicate that, for a given sample size, the complete N-of-1 design consistently achieves the lowest estimation variance relative to the alternative designs, thereby representing a more efficient design for biosimilar assessment under the conditions examined.
Title: Application of complete N-of-1 trial design in bioequivalence-biosimilar drug development. Journal of Biopharmaceutical Statistics, pp. 1106-1125.
Pub Date: 2025-10-01 | Epub Date: 2025-04-26 | DOI: 10.1080/10543406.2025.2489288
Jing Zhai, Fraser Smith, Guoxing Soon
Selecting the primary endpoint has been one of the most challenging tasks in the design of clinical trials. Typical endpoints include binary, continuous, or time-to-event endpoints. The primary endpoint for many clinical trials is binary and is defined based on a threshold of a continuous endpoint. Many such trials could be underpowered: it can be challenging to decide the appropriate threshold used to define the binary endpoint; the best guess could be wrong, and the study loses power when that happens. For this reason, we propose to use an ordinal endpoint defined by two or more cut points as a primary or secondary efficacy endpoint when facing such challenges, to spread the risk from comparing treatment differences at a single cut point to multiple cut points. This way the study can maintain its power even if the results differ from the initial expectations. In this paper, we evaluate the performance of continuous, binary, and ordinal endpoints via extensive simulation studies. Furthermore, we compare the three types of endpoints across many clinical trials. Overall, we demonstrate that there may be some situations where the use of ordinal categorical endpoints, based on clinical and statistical considerations, could offer advantages as a primary or secondary efficacy endpoint. Disclaimer: This article has been reviewed by FDA and determined not to be consistent with the Agency's views or policies. It reflects only the views and opinions of the authors.
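The power trade-off the paper studies can be reproduced with a small simulation. In this hedged sketch (the effect size, cut points, and simple two-sample z-tests are invented for illustration, not the paper's exact analyses), a binary endpoint defined by a badly placed single cut loses substantial power, while a three-category ordinal endpoint built from two cuts retains much of it:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_power(n=100, delta=0.5, cut_lo=0.0, cut_hi=1.5,
                   n_sim=2000, z_crit=1.959964):
    """Estimate power for a binary endpoint (single cut at cut_hi)
    vs an ordinal endpoint (three categories from two cuts) derived
    from the same latent continuous outcome, using two-sample z-tests."""
    hits_bin = hits_ord = 0
    for _ in range(n_sim):
        ctl = rng.normal(0.0, 1.0, n)
        trt = rng.normal(delta, 1.0, n)
        # Binary responder endpoint: latent value above the high cut.
        pb_c, pb_t = (ctl > cut_hi).mean(), (trt > cut_hi).mean()
        se = np.sqrt(pb_c * (1 - pb_c) / n + pb_t * (1 - pb_t) / n)
        if se > 0 and abs(pb_t - pb_c) / se > z_crit:
            hits_bin += 1
        # Ordinal endpoint: scores 0/1/2 from the two cut points.
        oc = np.digitize(ctl, [cut_lo, cut_hi]).astype(float)
        ot = np.digitize(trt, [cut_lo, cut_hi]).astype(float)
        se = np.sqrt(oc.var(ddof=1) / n + ot.var(ddof=1) / n)
        if abs(ot.mean() - oc.mean()) / se > z_crit:
            hits_ord += 1
    return hits_bin / n_sim, hits_ord / n_sim

power_bin, power_ord = simulate_power()
```

With these settings the ordinal endpoint typically shows markedly higher simulated power than the single-cut binary endpoint, illustrating the risk-spreading argument made above.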
Title: Comparison of continuous, binary, and ordinal endpoints. Journal of Biopharmaceutical Statistics, pp. 1143-1160.
Pub Date: 2025-10-01 | Epub Date: 2025-04-20 | DOI: 10.1080/10543406.2025.2489294
Hong Zhang, Jie Pu, Shibing Deng, Satrajit Roychoudhury, Haitao Chu, Douglas Robinson
In the era of precision medicine, more and more clinical trials are driven or guided by biomarkers: patient characteristics objectively measured and evaluated as indicators of normal biological processes, pathogenic processes, or pharmacologic responses to therapeutic interventions. With the overarching objective to optimize and personalize disease management, biomarker-guided clinical trials increase efficiency by appropriately utilizing prognostic or predictive biomarkers in the design. However, the efficiency gain is often not quantitatively compared to the traditional all-comers design, in which a faster enrollment rate is expected (e.g. because enrollment is not restricted to biomarker-positive patients), potentially leading to a shorter duration. To accurately predict biomarker-guided trial duration, we propose a general framework using mixture distributions to account for a heterogeneous population. Extensive simulations are performed to evaluate the impact of population heterogeneity and the dynamics of biomarker characteristics and disease on the study duration. Several influential parameters are identified, including median survival time, enrollment rate, biomarker prevalence, and effect size. Re-assessments of two publicly available trials are conducted to empirically validate the prediction accuracy and to demonstrate the practical utility.
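The core of such duration prediction can be sketched with a toy simulator. The assumptions here — Poisson accrual, exponential survival within each biomarker subgroup, no censoring or dropout — are illustrative simplifications, not the authors' mixture framework:

```python
import math
import random

def simulate_duration(n_events_target, accrual_rate, prevalence,
                      median_pos, median_neg, n_subjects, seed=7):
    """Calendar time at which the target event count is reached.
    Enrollment follows a Poisson process (exponential gaps); survival
    is exponential within each biomarker subgroup, so the population
    is a mixture of two event-time distributions."""
    rng = random.Random(seed)
    t = 0.0
    event_times = []
    for _ in range(n_subjects):
        t += rng.expovariate(accrual_rate)            # enrollment time
        median = median_pos if rng.random() < prevalence else median_neg
        surv = rng.expovariate(math.log(2) / median)  # subject survival
        event_times.append(t + surv)                  # calendar event time
    event_times.sort()
    return event_times[n_events_target - 1]

# Same seed, so both calls share one simulated trial trajectory.
d50 = simulate_duration(50, accrual_rate=10.0, prevalence=0.5,
                        median_pos=18.0, median_neg=9.0, n_subjects=300)
d100 = simulate_duration(100, accrual_rate=10.0, prevalence=0.5,
                         median_pos=18.0, median_neg=9.0, n_subjects=300)
```

In practice one would replicate this over many seeds and report the distribution (e.g. the median and an interval) of the predicted study duration, and vary prevalence, enrollment rate, and subgroup medians to see their influence.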
Title: Study duration prediction for clinical trials with time-to-event endpoints accounting for heterogeneous population. Journal of Biopharmaceutical Statistics, pp. 1255-1270.
Pub Date: 2025-10-01 | Epub Date: 2025-04-27 | DOI: 10.1080/10543406.2025.2489293
Caroline A Falvey, Jamie L Todd, Megan L Neely
Identifying clinical or biological risk factors for disease plays a critical role in enabling earlier disease diagnosis, prognostic outcomes assessment, and may inform disease prevention or monitoring practices. One framework commonly examined is understanding the association between a risk factor ever occurring in follow-up and the future risk of an outcome. If such an association is found, researchers are often asked to validate the finding. External validation is often infeasible, and validation may only be performed internally. However, the performance of internal validation methods in the setting of a time-dependent binary indicator and a time-to-event outcome has not been well-studied. We emulated a dataset motivated by real-world serial biomarker observations and performed extensive simulation studies to evaluate the performance of a resampling-based method to internally validate the association between a time-dependent binary indicator and a time-to-event outcome. We found the resampling-based method achieved optimal power for validating such an association while maintaining good Type I error control.
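As a simplified, self-contained analogue of such a resampling check — it treats the ever-positive indicator as a fixed group label and uses a permutation logrank test, whereas the paper's setting involves a genuinely time-dependent indicator — one can estimate how extreme the observed association is under relabeling:

```python
import random

def logrank_z(times, events, group):
    """Standardized logrank statistic comparing group 1 vs group 0."""
    o_minus_e, var = 0.0, 0.0
    for t in sorted({ti for ti, di in zip(times, events) if di}):
        idx = [i for i, ti in enumerate(times) if ti >= t]   # risk set
        n = len(idx)
        n1 = sum(group[i] for i in idx)                      # group-1 at risk
        d = sum(1 for i, ti in enumerate(times) if ti == t and events[i])
        d1 = sum(group[i] for i, ti in enumerate(times) if ti == t and events[i])
        o_minus_e += d1 - d * n1 / n                         # observed - expected
        if n > 1:
            var += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / var ** 0.5 if var > 0 else 0.0

def permutation_p(times, events, group, n_perm=500, seed=3):
    """Permutation p-value: how often does shuffling the indicator give
    a statistic at least as extreme as the observed one?"""
    rng = random.Random(seed)
    obs = abs(logrank_z(times, events, group))
    g = list(group)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(g)
        hits += abs(logrank_z(times, events, g)) >= obs
    return (hits + 1) / (n_perm + 1)

# Clearly separated toy data: indicator-positive subjects survive longer.
times = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17]
events = [1] * 16
group = [0] * 8 + [1] * 8
p = permutation_p(times, events, group)
```

A small permutation p-value indicates that the observed group separation is unlikely under relabeling; the paper's resampling scheme addresses the harder time-dependent version of this question.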
Title: Evaluating the performance of a resampling approach for internally validating the association between a time-dependent binary indicator and time-to-event outcome. Journal of Biopharmaceutical Statistics, pp. 1244-1254.
Pub Date: 2025-10-01 | Epub Date: 2025-04-11 | DOI: 10.1080/10543406.2025.2489279
Shein-Chung Chow, Anne Pariser, Steven Galson
Recently, the use of alternative and confirmatory data (ACD) in support of rare disease drug development has received much attention (NASEM 2024). This article provides an overview of the limitations and major challenges in the use of ACD that are commonly encountered in rare disease drug (including biologics) product development. In addition, some innovative approaches using ACD under a novel two-stage hybrid adaptive trial design are proposed to assist sponsors in rare disease drug development. Under the proposed hybrid adaptive trial design, statistical considerations regarding the implementation of ACD in support of demonstrating safety and efficacy in rare disease drug development are discussed.
Title: Use of alternative and confirmatory data in support of rare disease drug development. Journal of Biopharmaceutical Statistics, pp. 1005-1019.
There has been growing interest in incorporating historical data to improve the efficiency of randomized controlled trials (RCTs) or reduce their required sample size. A key challenge is that the patient characteristics of the historical data may differ from those of the current RCT. To address this issue, a well-known approach is to employ propensity score matching or inverse probability weighting to adjust for baseline heterogeneity, enabling the incorporation of historical data into the inference for the current RCT. However, this approach is subject to bias when there are unmeasured confounders. We address this issue by combining a self-adapting mixture (SAM) prior with propensity score matching and inverse probability weighting to enable additional adaptation for information borrowing in the presence of unmeasured confounders. The resulting propensity score-integrated SAM (PS-SAM) priors are robust in the sense that if there are no unmeasured confounders, they result in an unbiased causal estimate of the treatment effect; and if there are unmeasured confounders, they provide a notably less biased treatment effect estimate with better-controlled type I error. Simulation studies demonstrate that the PS-SAM prior exhibits desirable operating characteristics enabling adaptive information borrowing. The proposed methodology is freely available as the R package "SAMprior".
{"title":"PS-SAM: propensity-score-integrated self-adapting mixture prior to dynamically and efficiently borrow information from historical data.","authors":"Yuansong Zhao, Peng Yang, Glen Laird, Josh Chen, Ying Yuan","doi":"10.1080/10543406.2025.2489284","DOIUrl":"10.1080/10543406.2025.2489284","url":null,"abstract":"<p><p>There has been growing interest in incorporating historical data to improve the efficiency of randomized controlled trials (RCTs) or reduce their required sample size. A key challenge is that the patient characteristics of the historical data may differ from those of the current RCT. To address this issue, a well-known approach is to employ propensity score matching or inverse probability weighting to adjust for baseline heterogeneity, enabling the incorporation of historical data into the inference of the RCT. However, this approach is subject to bias when there are unmeasured confounders. We address this issue by combining a self-adapting mixture (SAM) prior with propensity score matching and inverse probability weighting to enable additional adaptation for information borrowing in the presence of unmeasured confounders. The resulting propensity score-integrated SAM (PS-SAM) priors are robust in the sense that if there are no unmeasured confounders, they yield an unbiased causal estimate of the treatment effect; and if there are unmeasured confounders, they provide a notably less biased treatment effect estimate with better-controlled type I error. Simulation studies demonstrate that the PS-SAM prior exhibits desirable operating characteristics enabling adaptive information borrowing. The proposed methodology is freely available as the R package \"SAMprior\".</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1067-1082"},"PeriodicalIF":1.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12353383/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144027311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
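The odds-type inverse probability weighting that underlies constructions like PS-SAM can be sketched in miniature. The snippet below is an illustrative sketch only, not the authors' implementation or the SAMprior package API; all data and function names are hypothetical. It fits a one-covariate logistic propensity model for trial membership, re-weights external controls by the propensity odds e(x)/(1 - e(x)) so their covariate distribution resembles the current trial's, and reports the Kish effective sample size of the weighted external data.

```python
import math

def fit_propensity(x, z, lr=0.1, iters=2000):
    """Plain logistic regression of trial membership z (1 = current trial,
    0 = external control) on one covariate x, fit by gradient ascent."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, zi in zip(x, z):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += zi - p
            g1 += (zi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def ipw_weights(x_ext, b0, b1):
    """Odds weights e(x) / (1 - e(x)) for external subjects, where e(x) is
    the estimated probability of belonging to the current trial."""
    ws = []
    for xi in x_ext:
        e = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
        ws.append(e / (1.0 - e))
    return ws

def effective_sample_size(ws):
    """Kish effective sample size (sum w)^2 / sum(w^2) of the weighted data."""
    s1 = sum(ws)
    s2 = sum(w * w for w in ws)
    return s1 * s1 / s2

# Toy example (hypothetical data): external controls (z = 0) skew toward
# lower covariate values than the current trial (z = 1).
x = [0.2, 0.5, 0.8, 1.0, 1.2, 1.4, 1.6, 1.9]
z = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_propensity(x, z)
w = ipw_weights([0.2, 0.5, 0.8, 1.2], b0, b1)
print(effective_sample_size(w))  # at most 4.0, the raw external count
```

A weighted effective sample size well below the raw external count signals covariate imbalance: the re-weighted external data carry less information, and a mixture prior calibrated on them would borrow correspondingly less.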
Pub Date : 2025-09-19 DOI: 10.1080/10543406.2025.2557573
Xiaofeng Liu, Ayyub Sheikhi
The conventional Cox proportional hazards model is designed to measure the influence of factors on the timing of an event and focuses on relative rather than absolute risk. For settings with multiple time-to-event variables, this study introduces a copula-based extension of the standard Cox model that captures the dependence structure between those variables. We employ vine copulas to model the potentially non-linear relationships between failure times. Through simulation studies, we show that our new algorithm substantially improves the accuracy of predicting failure times compared to existing methodologies. We apply our findings to predict mortality timing in real medical data.
{"title":"On improving the accuracy of prediction in Cox models for failure times using copulas.","authors":"Xiaofeng Liu, Ayyub Sheikhi","doi":"10.1080/10543406.2025.2557573","DOIUrl":"https://doi.org/10.1080/10543406.2025.2557573","url":null,"abstract":"<p><p>The conventional Cox proportional hazards model is designed to measure the influence of factors on the timing of an event and focuses on relative rather than absolute risk. For settings with multiple time-to-event variables, this study introduces a copula-based extension of the standard Cox model that captures the dependence structure between those variables. We employ vine copulas to model the potentially non-linear relationships between failure times. Through simulation studies, we show that our new algorithm substantially improves the accuracy of predicting failure times compared to existing methodologies. We apply our findings to predict mortality timing in real medical data.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-14"},"PeriodicalIF":1.2,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145088382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
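Vine copulas are built from bivariate copula blocks, and the basic idea can be illustrated with one such block. The sketch below is a hedged example, not the paper's algorithm: it uses a Clayton copula (chosen purely for its closed-form conditional inverse) to simulate dependent exponential failure times, then checks the induced dependence against the Clayton relation Kendall's tau = theta / (theta + 2).

```python
import math
import random

def _unit(rng):
    """Uniform draw bounded away from 0 and 1 so the transforms stay finite."""
    return rng.uniform(1e-12, 1.0 - 1e-12)

def clayton_pair(theta, rng):
    """One (u1, u2) draw from a Clayton copula (theta > 0) via the
    conditional-distribution (inverse Rosenblatt) method."""
    u1 = _unit(rng)
    v = _unit(rng)
    u2 = (u1 ** (-theta) * (v ** (-theta / (theta + 1.0)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

def dependent_failure_times(n, theta, rate1, rate2, seed=0):
    """n pairs of exponential failure times whose ranks are coupled by a
    Clayton copula; the implied Kendall's tau is theta / (theta + 2)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u1, u2 = clayton_pair(theta, rng)
        # Inverse-CDF of Exponential(rate); log1p keeps small u accurate.
        pairs.append((-math.log1p(-u1) / rate1, -math.log1p(-u2) / rate2))
    return pairs

def kendall_tau(pairs):
    """Empirical Kendall's tau: concordant minus discordant pairs, normalized."""
    n = len(pairs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            s += 1 if d > 0 else (-1 if d < 0 else 0)
    return 2.0 * s / (n * (n - 1))
```

With theta = 2 the implied Kendall's tau is 0.5, so the empirical tau of a few hundred simulated pairs should land near that value; a full vine extends this pairwise construction to more than two failure times by stacking conditional bivariate copulas.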