Pub Date: 2025-11-27 | DOI: 10.1080/10543406.2025.2589731
M-DODII: Bayesian dose optimization design for randomized phase II study with multiple indications.
Sasha Amdur Kravets, Ziji Yu, Rachael Liu, Jianchang Lin
The landscape of oncology drug development is transitioning from traditional cytotoxic chemotherapy to novel agents such as molecularly targeted therapies (MTAs) and immunotherapies. Conventional dose optimization methods developed for chemotherapy, which assume a monotone dose-response relationship, may not be ideal for these novel therapies. Recognizing these limitations, the US FDA has introduced Project Optimus, an initiative aimed at reforming the current dose optimization paradigm. In addition to dose optimization, another critical objective of early-phase proof-of-concept clinical trials is indication selection. However, few methodologies address dose optimization and indication selection simultaneously. In this paper, we propose a Bayesian Dose Optimization Design for Randomized Phase II trials with Multiple Indications (M-DODII) that integrates Bayesian continuous monitoring with a Bayesian pick-the-winner approach, using efficacy and toxicity endpoints to inform dose selection across multiple indications simultaneously. Through simulation studies, we demonstrate that M-DODII has favorable operating characteristics with controlled selection error. Compared with other adaptive designs, M-DODII shows a lower probability of choosing a suboptimal dose, a higher probability of selecting the optimal dose, and a reduced total sample size.
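The design's decision rules are specified in the paper; as a hedged illustration of the Beta-Binomial posterior monitoring that designs of this kind typically build on, the sketch below flags a dose-indication arm for excessive toxicity or futility. The priors, target rates, and probability cutoffs are illustrative assumptions, not the M-DODII values.

```python
# Illustrative Beta-Binomial monitoring rules of the kind such designs build on;
# priors, target rates, and cutoffs below are assumptions for the sketch,
# not the values proposed in the paper.
from scipy.stats import beta

def dose_admissible(n_tox, n_pat_tox, n_eff, n_pat_eff,
                    tox_limit=0.30, eff_target=0.25,
                    c_tox=0.80, c_fut=0.90,
                    prior_a=0.5, prior_b=0.5):
    """Return (keep_dose, posterior summaries) under simple monitoring rules."""
    # Posterior of the toxicity rate: Beta(prior_a + events, prior_b + non-events)
    p_overly_toxic = beta.sf(tox_limit, prior_a + n_tox, prior_b + n_pat_tox - n_tox)
    # Posterior of the response rate at the same dose-indication arm
    p_futile = beta.cdf(eff_target, prior_a + n_eff, prior_b + n_pat_eff - n_eff)
    keep = (p_overly_toxic < c_tox) and (p_futile < c_fut)
    return keep, {"P(tox > limit)": p_overly_toxic, "P(eff < target)": p_futile}

# Example: 4/15 toxicities and 6/15 responses in one dose-indication arm
print(dose_admissible(4, 15, 6, 15))
```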
{"title":"M-DODII: Bayesian dose optimization design for randomized phase II study with multiple indications.","authors":"Sasha Amdur Kravets, Ziji Yu, Rachael Liu, Jianchang Lin","doi":"10.1080/10543406.2025.2589731","DOIUrl":"https://doi.org/10.1080/10543406.2025.2589731","url":null,"abstract":"<p><p>The landscape of oncology drug development is transitioning from traditional cytotoxic chemotherapy drugs to novel agents, such as molecularly targeted therapies (MTA) or immunotherapies. Conventional dose optimization methods based on chemotherapy that assume a monotone dose-response relationship might not be ideal for the development of these novel therapies. Recognizing these limitations, the US FDA has introduced Project Optimus, an initiative aimed to reform the current paradigm of dose optimization. In addition to dose optimization, another critical objective for early phase proof-of-concept clinical trials is indication selection. However, there are limited methodologies that can address dose optimization and indication selection simultaneously. In this paper, we propose a Bayesian Dose Optimization Design for Randomized Phase II trials with Multiple Indications (M-DODII) that integrates Bayesian continuous monitoring and Bayesian pick-the-winner approach, utilizing efficacy and toxicity endpoints to inform dose selection for multiple indications simultaneously. Through simulation studies, we demonstrate that M-DODII has favorable operating characteristics with controlled selection error. Compared to other adaptive designs, M-DODII shows a lower probability of choosing a suboptimal dose, a higher probability of selecting the optimal dose, and reduced total sample size.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-14"},"PeriodicalIF":1.2,"publicationDate":"2025-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145642993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-24 | DOI: 10.1080/10543406.2025.2589734
Enhancing dose selection in phase I cancer trials: Extending the Bayesian Logistic Regression Model with non-DLT adverse events integration.
Andrea Nizzardo, Luca Genetti, Marco Pergher
This work introduces the Burdened Bayesian Logistic Regression Model (BBLRM), an enhancement of the Bayesian Logistic Regression Model (BLRM) for dose-finding in phase I oncology trials. The BLRM determines the maximum tolerated dose (MTD) based on dose-limiting toxicities (DLTs). However, clinicians often perceive model-based designs like the BLRM as complex and less conservative than rule-based designs, such as the widely used 3 + 3 method. To address these concerns, BBLRM incorporates non-DLT adverse events (nDLTAEs), which, although not severe enough to be DLTs, indicate potential toxicity risks at higher doses. BBLRM introduces an additional parameter δ to account for nDLTAEs, adjusting toxicity probability estimates to make dose escalation more conservative while maintaining accurate MTD allocation. This parameter, derived from the proportion of patients experiencing nDLTAEs, is tuned to balance conservatism with model performance, reducing the risk of selecting overly toxic doses. Additionally, involving clinicians in identifying nDLTAEs enhances their engagement in the dose-finding process. A simulation study compares BBLRM with two other BLRM methods and a two-stage Continual Reassessment Method (CRM) that incorporates nDLTAEs. Results show that BBLRM reduces the proportion of toxic doses selected as the MTD without compromising accuracy in MTD identification. These findings suggest that integrating nDLTAEs can improve the safety and acceptance of model-based designs in phase I oncology trials.
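For context, the sketch below shows the standard two-parameter BLRM that BBLRM extends (the logit of the DLT probability is linear in log dose), fit by a simple weighted prior-sampling approximation with an EWOC-style overdose check. The prior parameters, reference dose, observed counts, and the 0.33 overdose boundary are assumptions for illustration; the δ adjustment for nDLTAEs is not reproduced here.

```python
# Minimal sketch of the standard two-parameter BLRM that BBLRM extends:
# logit P(DLT at dose d) = log(alpha) + beta * log(d / d_ref).
# Priors, reference dose, data, and the EWOC cutoff are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
d_ref = 100.0
doses = np.array([25.0, 50.0, 100.0, 200.0])
n_pat = np.array([3, 3, 6, 0])          # patients treated so far
n_dlt = np.array([0, 0, 1, 0])          # DLTs observed

# Prior draws: alpha ~ LogNormal, beta ~ LogNormal (beta > 0 keeps monotonicity)
M = 200_000
alpha = rng.lognormal(mean=np.log(0.25), sigma=1.0, size=M)
beta_ = rng.lognormal(mean=0.0, sigma=0.7, size=M)

# Binomial likelihood of the observed DLT data under each prior draw
logit = np.log(alpha)[:, None] + beta_[:, None] * np.log(doses / d_ref)
p = 1.0 / (1.0 + np.exp(-logit))                      # M x n_doses DLT probabilities
loglik = (n_dlt * np.log(p) + (n_pat - n_dlt) * np.log1p(-p)).sum(axis=1)
w = np.exp(loglik - loglik.max()); w /= w.sum()       # self-normalized weights

# Posterior probability of excessive toxicity (EWOC-style check) per dose
p_overdose = (w[:, None] * (p > 0.33)).sum(axis=0)
print(dict(zip(doses, np.round(p_overdose, 3))))
```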
{"title":"Enhancing dose selection in phase I cancer trials: Extending the Bayesian Logistic Regression Model with non-DLT adverse events integration.","authors":"Andrea Nizzardo, Luca Genetti, Marco Pergher","doi":"10.1080/10543406.2025.2589734","DOIUrl":"https://doi.org/10.1080/10543406.2025.2589734","url":null,"abstract":"<p><p>This work introduces the Burdened Bayesian Logistic Regression Model (BBLRM), an enhancement of the Bayesian Logistic Regression Model (BLRM) for dose-finding in phase I oncology trials. The BLRM determines the maximum tolerated dose (MTD) based on dose-limiting toxicities (DLTs). However, clinicians often perceive model-based designs like BLRM as complex and less conservative than rule-based designs, such as the widely used 3 + 3 method. To address these concerns, BBLRM incorporates non-DLT adverse events (nDLTAEs), which, although not severe enough to be DLTs, indicate potential toxicity risks at higher doses. BBLRM introduces an additional parameter δ to account for nDLTAEs, adjusting toxicity probability estimates to make dose escalation more conservative while maintaining accurate MTD allocation. This parameter, generated basing on the proportion of patients experiencing nDLTAEs, is tuned to balance conservatism with model performance, reducing the risk of selecting overly toxic doses. Additionally, involving clinicians in identifying nDLTAEs enhances their engagement in the dose-finding process. A simulation study compares BBLRM with two other BLRM methods and a two-stage Continual Reassessment Method (CRM) incorporating nDLTAEs. Results show that BBLRM reduces the proportion of toxic doses selected as MTD without compromising the accuracy in MTD identification. These findings suggest that integrating nDLTAEs can improve the safety and acceptance of model-based designs in phase I oncology trials.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-21"},"PeriodicalIF":1.2,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145589649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-21 | DOI: 10.1080/10543406.2025.2571224
Recovery of overall survival information from treatment switching in oncology trials using multiple imputation.
Jianbo Xu
In oncology trials, patients in both the control and experimental arms can receive different subsequent anti-cancer therapies (SATs) after discontinuing their randomized study drugs, a phenomenon commonly referred to as treatment switching. SATs may have the potential to extend overall survival (OS) in patients treated with the control and experimental drugs. Without recovering the information lost to SATs, the statistical power of a trial can be drastically reduced, making it difficult or impossible to meet the efficacy objective. This article presents a novel statistical method that imputes the post-switching survival time multiple times to derive a point estimate of the true hazard ratio (HR) of OS between the experimental and control drugs and the associated 95% confidence interval (CI). The proposed method provides an effective solution for recovering OS information lost to SATs. It also offers an efficient way to evaluate the true causal treatment effect, potentially increasing statistical power. Additionally, the method can be applied to patients who cross over from placebo to the experimental treatment in placebo-controlled trials. Simulation studies demonstrate that the proposed method performs well and reliably, and applications to oncology trials using simulated data are provided.
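The paper's contribution is the imputation model for post-switching survival; the sketch below only illustrates the generic Rubin's-rules step of pooling a log hazard ratio and its variance across imputed datasets. The per-imputation estimates are placeholder numbers, and a normal quantile is used instead of Rubin's t-based degrees of freedom for brevity.

```python
# Generic Rubin's-rules pooling of a log hazard ratio across M imputed datasets.
# How the post-switching times are imputed is the paper's method; the estimates
# below are placeholder numbers used only to show the combining arithmetic.
import numpy as np
from scipy.stats import norm

log_hr = np.array([-0.32, -0.28, -0.35, -0.30, -0.27])   # per-imputation Cox estimates
se = np.array([0.11, 0.12, 0.11, 0.13, 0.12])            # per-imputation standard errors

M = len(log_hr)
q_bar = log_hr.mean()                       # pooled point estimate
u_bar = (se ** 2).mean()                    # within-imputation variance
b = log_hr.var(ddof=1)                      # between-imputation variance
t_var = u_bar + (1 + 1 / M) * b             # total variance (Rubin, 1987)

z = norm.ppf(0.975)                         # normal quantile; Rubin's t-based df omitted
ci = np.exp([q_bar - z * np.sqrt(t_var), q_bar + z * np.sqrt(t_var)])
print(f"pooled HR = {np.exp(q_bar):.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```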
{"title":"Recovery of overall survival information from treatment switching in oncology trials using multiple imputation.","authors":"Jianbo Xu","doi":"10.1080/10543406.2025.2571224","DOIUrl":"https://doi.org/10.1080/10543406.2025.2571224","url":null,"abstract":"<p><p>In oncology trials, patients in both the control and experimental arms can receive different subsequent anti-cancer therapies (SATs) after discontinuing their randomized study drugs, a phenomenon commonly referred to as treatment switching. SATs may have the potential to extend overall survival (OS) in patients treated with the control and experimental drugs. Without recovering the information from the SATs, the statistical power of the clinical trials could be drastically reduced, thus making it difficult or impossible to meet the efficacy objective. This article presents a novel statistical method for imputing the post-switching survival time multiple times to derive the point estimate of the true hazard ratio (HR) of OS between the experimental and control drugs and the associated 95% confidence interval (CI). The proposed method provides an effective solution for recovering lost information in the OS caused by SATs. It also offers an efficient way to evaluate the true causal treatment effect, potentially increasing the statistical power. Additionally, this method can be used for patients with a crossover from a placebo to an experimental treatment in placebo-controlled trials. Simulation studies demonstrated that the proposed method performed well and reliably, and applications to oncology trials using the simulated data are provided.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-26"},"PeriodicalIF":1.2,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145566435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-19 | DOI: 10.1080/10543406.2025.2575939
Statistical approaches to evaluate the positive control drug using the hERG assay.
Yu-Ting Weng, Dalong Huang
Assessment of the human ether-a-go-go-related gene (hERG) safety assay is essential for estimating the risk that a drug will cause delayed repolarization and QT interval prolongation prior to human administration. Quantitative assessment of hERG safety assay similarity presents significant challenges due to the absence of a consensus methodology and substantial inter-laboratory variability in hERG assay performance. We developed a statistical framework for quantitative assessment of hERG safety assay similarity for drug products between sponsors' laboratories and laboratories that follow the protocol recommended in the ICH E14/S7B Q&A best practices. Our approach employs fixed-margin equivalence testing methodology. Using real-world and/or simulated data, we demonstrate that the proposed equivalence testing methods successfully identify similar hERG assays between laboratories for 28 Comprehensive In Vitro Proarrhythmia Assay (CiPA) drugs. The testing results align with the domain experts' assessments, validating the framework's utility for regulatory decision-making.
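As a hedged illustration of fixed-margin equivalence testing of the general kind applied here, the sketch below runs a two one-sided tests (TOST) comparison of log-scale assay readouts between two laboratories. The data, the 1.5-fold margin, and the pooled-t degrees of freedom are assumptions, not the framework's actual margins or endpoints.

```python
# Generic fixed-margin equivalence test (two one-sided tests, TOST) comparing
# assay results between laboratories; the margin, the data, and the log-scale
# analysis are assumptions for illustration, not the paper's values.
import numpy as np
from scipy import stats

sponsor = np.log([48.2, 55.1, 50.3, 61.7, 47.9, 53.4])     # e.g. log IC50s, lab A
reference = np.log([52.0, 49.5, 58.2, 50.8, 55.6, 51.1])   # e.g. log IC50s, lab B
margin = np.log(1.5)                                        # equivalence margin on log scale

diff = sponsor.mean() - reference.mean()
se = np.sqrt(sponsor.var(ddof=1) / len(sponsor) + reference.var(ddof=1) / len(reference))
df = len(sponsor) + len(reference) - 2                      # simple df approximation

t_lower = (diff + margin) / se      # H0: difference <= -margin
t_upper = (diff - margin) / se      # H0: difference >= +margin
p_lower = 1 - stats.t.cdf(t_lower, df)
p_upper = stats.t.cdf(t_upper, df)
equivalent = max(p_lower, p_upper) < 0.05
print(f"diff = {diff:.3f}, TOST p = {max(p_lower, p_upper):.3f}, equivalent: {equivalent}")
```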
{"title":"Statistical approaches to evaluate the positive control drug using the hERG assay.","authors":"Yu-Ting Weng, Dalong Huang","doi":"10.1080/10543406.2025.2575939","DOIUrl":"https://doi.org/10.1080/10543406.2025.2575939","url":null,"abstract":"<p><p>The assessment of human ether-a-go-go-related gene (hERG) safety assay is essential for estimating the risk that a drug will cause delayed repolarization and QT interval prolongation prior to human administration. Quantitative assessment of hERG safety assay similarity presents significant challenges due to the absence of consensus methodology and substantial inter-laboratory variability in hERG assay performance. We developed a statistical framework to conduct quantitative assessment of hERG safety assay similarity for drug products between sponsor's laboratories and laboratories that follow the ICH E14 S7b Q&A Best Practice recommended protocol. Our approach employs fixed margin equivalence testing methodology. Using real-world and/or simulated data, we demonstrate that the proposed equivalence testing methods successfully identify similar hERG assays between laboratories for 28 Comprehensive <i>In Vitro</i> Proarrhythmia Assay (CiPA) drugs. The testing results align with the domain experts' assessments, validating the framework's utility for regulatory decision-making.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-17"},"PeriodicalIF":1.2,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145551786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-18 | DOI: 10.1080/10543406.2025.2575947
Blinded sample size re-estimation in a crossover study.
Shaofei Zhao, Balakrishna Hosmane, Chen Chen, Yi-Lin Chiu
Bioequivalence studies play a pivotal role in drug development by establishing the clinical equivalence of two drug formulations. These studies often utilize crossover designs to facilitate within-subject treatment comparisons, optimizing statistical power with fewer subjects. However, uncertainty regarding the variance of a new drug or formulation during planning presents a challenge for sample size determination. While adaptive designs offer a potential solution, their application in crossover studies is less explored compared to group sequential designs, and many existing adaptive methods require data unblinding during the trial. Only two blinded sample size re-estimation approaches have been developed in crossover settings to date. In this paper, we propose a novel method for blinded within-subject variance estimation at interim analysis and re-estimate the sample size to achieve the desired power. We thoroughly investigate its analytical properties and introduce a refined, unbiased estimator. Through extensive simulation studies, our method shows comparable performance to existing blinded approaches and offers a distinct advantage in scenarios with small treatment differences and large subject variances.
{"title":"Blinded sample size re-estimation in a crossover study.","authors":"Shaofei Zhao, Balakrishna Hosmane, Chen Chen, Yi-Lin Chiu","doi":"10.1080/10543406.2025.2575947","DOIUrl":"https://doi.org/10.1080/10543406.2025.2575947","url":null,"abstract":"<p><p>Bioequivalence studies play a pivotal role in drug development by establishing the clinical equivalence of two drug formulations. These studies often utilize crossover designs to facilitate within-subject treatment comparisons, optimizing statistical power with fewer subjects. However, uncertainty regarding the variance of a new drug or formulation during planning presents a challenge for sample size determination. While adaptive designs offer a potential solution, their application in crossover studies is less explored compared to group sequential designs, and many existing adaptive methods require data unblinding during the trial. Only two blinded sample size re-estimation approaches have been developed in crossover settings to date. In this paper, we propose a novel method for blinded within-subject variance estimation at interim analysis and re-estimate the sample size to achieve the desired power. We thoroughly investigate its analytical properties and introduce a refined, unbiased estimator. Through extensive simulation studies, our method shows comparable performance to existing blinded approaches and offers a distinct advantage in scenarios with small treatment differences and large subject variances.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-19"},"PeriodicalIF":1.2,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145551806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-12 | DOI: 10.1080/10543406.2025.2575945
Curtailed procedures for binomial random-sized subset selection.
Yifang Zhang, Pinyuen Chen
Randomized subset selection procedures are important statistical tools in clinical trials involving multiple treatments. However, traditional methods lack built-in early stopping criteria, leading to potential inefficiencies and unnecessary patient exposure. Inspired by Gupta and Sobel's (1960) foundational subset selection approach and Bechhofer and Kulkarni's (1982) idea of curtailment, this paper introduces a curtailed subset selection procedure for binomial populations under a frequentist framework. Specifically, our method includes a mathematically driven stopping rule that terminates sampling as soon as non-leading treatments can no longer statistically surpass the current leader. We derive explicit formulas for calculating the probability of correct selection and the expected sample size, and we also introduce an optional randomization extension to precisely achieve pre-specified accuracy targets. Simulation studies confirm that the proposed curtailed procedure maintains comparable accuracy levels while substantially reducing expected sample sizes compared to existing procedures. Illustrative examples from clinical trial scenarios demonstrate the practical benefits and ease of implementation. This approach provides researchers and practitioners with an efficient, statistically rigorous tool for optimizing subset selection in biopharmaceutical research.
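The paper's stopping rule and its exact probability-of-correct-selection calculations are its own; the sketch below illustrates only the simplest deterministic form of curtailment, dropping an arm once it cannot catch the current leader even if it succeeds in every remaining patient. Arm names, counts, and maximum sample sizes are illustrative.

```python
# Simplified deterministic curtailment check for a "pick the best binomial arm"
# setting: stop enrolling an arm once it cannot catch the current leader even if
# all of its remaining patients respond and the leader gains no more successes.
# The paper's rule and its selection-probability guarantees are more refined;
# this only illustrates the basic curtailment idea.
from dataclasses import dataclass

@dataclass
class Arm:
    name: str
    successes: int
    enrolled: int
    max_n: int          # planned maximum sample size per arm

    @property
    def remaining(self) -> int:
        return self.max_n - self.enrolled

def curtailment_status(arms):
    leader = max(arms, key=lambda a: a.successes)
    decisions = {}
    for a in arms:
        if a is leader:
            decisions[a.name] = "leader"
        elif a.successes + a.remaining < leader.successes:
            decisions[a.name] = "curtail (cannot catch leader)"
        else:
            decisions[a.name] = "continue"
    return decisions

arms = [Arm("A", 9, 15, 20), Arm("B", 3, 15, 20), Arm("C", 7, 15, 20)]
print(curtailment_status(arms))   # B is curtailed; C can still catch A
```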
{"title":"Curtailed procedures for binomial random-sized subset selection.","authors":"Yifang Zhang, Pinyuen Chen","doi":"10.1080/10543406.2025.2575945","DOIUrl":"https://doi.org/10.1080/10543406.2025.2575945","url":null,"abstract":"<p><p>Randomized subset selection procedures are important statistical tools in clinical trials involving multiple treatments. However, traditional methods lack built-in early stopping criteria, leading to potential inefficiencies and unnecessary patient exposure. Inspired by Gupta and Sobel's (1960) foundational subset selection approach and Bechhofer and Kulkarni's (1982) idea of curtailment, this paper introduces a curtailed subset selection procedure for binomial populations under a frequentist framework. Specifically, our method includes a mathematically driven stopping rule that terminates sampling as soon as non-leading treatments can no longer statistically surpass the current leader. We derive explicit formulas for calculating the probability of correct selection and the expected sample size, and we also introduce an optional randomization extension to precisely achieve pre-specified accuracy targets. Simulation studies confirm that the proposed curtailed procedure maintains comparable accuracy levels while substantially reducing expected sample sizes compared to existing procedures. Illustrative examples from clinical trial scenarios demonstrate the practical benefits and ease of implementation. This approach provides researchers and practitioners with an efficient, statistically rigorous tool for optimizing subset selection in biopharmaceutical research.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-28"},"PeriodicalIF":1.2,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145508132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-12 | DOI: 10.1080/10543406.2025.2575940
Power priors and type I error control: constrained borrowing of external control data.
Se Yoon Lee
Recently, hybrid designs have garnered significant attention in the healthcare industry due to their potential to improve statistical power and trial efficiency by augmenting randomized controlled trial data with external controls. The power prior methodology provides a versatile framework for constructing and analyzing data from hybrid designs. However, the use of external control data poses a risk of introducing bias, particularly in the presence of prior-data conflict, which can distort treatment effect estimates. Such biases may lead to erroneous conclusions, including the approval of ineffective treatments or the rejection of beneficial ones. To address these concerns, it is essential to borrow an appropriate amount of external data to maintain the type I error rate at an acceptable level, typically determined during trial planning in discussion with regulatory authorities. In this article, we present a novel power prior method to incorporate historical control data while safeguarding against inflation of the type I error rate beyond the maximally allowable nominal level. Through comprehensive simulation studies and an illustrative example, we demonstrate the practical advantages of our approach. The results illustrate that our method provides trial sponsors with a scientifically rigorous strategy for leveraging external control data in constructing efficient and reliable hybrid designs.
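The constrained-borrowing calibration is the paper's contribution; below is a minimal conjugate power-prior sketch for a binary endpoint showing how the discount parameter a0 scales the historical control likelihood and how the type I error under the null can be checked by simulation. The historical data, the value of a0, the decision threshold, and the sample sizes are assumptions.

```python
# Minimal conjugate power-prior sketch for a binary endpoint: the historical
# control likelihood enters the posterior raised to the power a0 in [0, 1].
# All numbers are assumptions used to show how type I error under the null can
# be checked by simulation; the paper's calibration is not reproduced here.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
y0, n0 = 36, 120          # historical control: responders / patients
a0 = 0.5                  # power-prior discount parameter
p_true = 0.30             # null scenario: treatment rate equals control rate
n_t, n_c = 60, 30         # current-trial sample sizes
threshold = 0.975         # declare success if P(p_t > p_c | data) exceeds this

def trial_rejects():
    y_t = rng.binomial(n_t, p_true)
    y_c = rng.binomial(n_c, p_true)
    # Control posterior borrows a0-discounted historical data (conjugate update)
    post_c = beta(1 + y_c + a0 * y0, 1 + (n_c - y_c) + a0 * (n0 - y0))
    post_t = beta(1 + y_t, 1 + n_t - y_t)
    draws = 4000
    prob_better = np.mean(post_t.rvs(draws, random_state=rng) >
                          post_c.rvs(draws, random_state=rng))
    return prob_better > threshold

type1 = np.mean([trial_rejects() for _ in range(2000)])
print(f"simulated type I error with a0={a0}: {type1:.3f}")
```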
{"title":"Power priors and type I error control: constrained borrowing of external control data.","authors":"Se Yoon Lee","doi":"10.1080/10543406.2025.2575940","DOIUrl":"https://doi.org/10.1080/10543406.2025.2575940","url":null,"abstract":"<p><p>Recently, hybrid designs have garnered significant attention in the healthcare industry due to their potential to improve statistical power and trial efficiency by augmenting randomized controlled trial data with external controls. The power prior methodology provides a versatile framework for constructing and analyzing data from hybrid designs. However, the use of external control data poses a risk of introducing bias, particularly in the presence of prior-data conflict, which can distort treatment effect estimates. Such biases may lead to erroneous conclusions, including the approval of ineffective treatments or the rejection of beneficial ones. To address these concerns, it is essential to borrow an appropriate amount of external data to maintain the type I error rate at an acceptable level, typically determined during trial planning in discussion with regulatory authorities. In this article, we present a novel power prior method to incorporate historical control data while safeguarding against inflation of the type I error rate beyond the maximally allowable nominal level. Through comprehensive simulation studies and an illustrative example, we demonstrate the practical advantages of our approach. The results illustrate that our method provides trial sponsors with a scientifically rigorous strategy for leveraging external control data in constructing efficient and reliable hybrid designs.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-23"},"PeriodicalIF":1.2,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145508085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-10 | DOI: 10.1080/10543406.2025.2575942
Tables, listings and figures in a clinical study report - quality or quantity?
Ning Li, Yaohua Zhang, Naitee Ting
In the pharmaceutical industry, a prevalent yet problematic phenomenon is that piles of statistical tables, listings, and figures (TLFs) are prepared and included in a clinical study report (CSR). While some TLFs convey critical insights and others provide essential context to help reviewers understand the various properties of the drug, many others are redundant and serve no meaningful purpose. The overabundance of unnecessary TLFs in the CSR body or appendix can have several detrimental effects: it may confuse or mislead reviewers, obscure the key messages, and waste valuable resources. This paper aims to shed light on this pervasive issue, highlight its potential adverse consequences, and explore the underlying reasons. We present two case examples to illustrate our points and offer practical solutions and recommendations. Finally, we conclude the paper with a closing remark.
{"title":"Tables, listings and figures in a clinical study report - quality or quantity?","authors":"Ning Li, Yaohua Zhang, Naitee Ting","doi":"10.1080/10543406.2025.2575942","DOIUrl":"https://doi.org/10.1080/10543406.2025.2575942","url":null,"abstract":"<p><p>In pharmaceutical industry, a prevalent yet problematic phenomenon is that piles of statistical tables, listings and figures (abbreviation TLFs) are prepared and included in a clinical trial study report (CSR). While some TLFs convey critical insights and others provide essential context to help reviewers understand the various properties of the drug, many other TLFs are redundant and serve no meaningful purpose. The overabundance of unnecessary TLFs in the CSR body or appendix can have several detrimental effects: it may confuse or mislead reviewers, obscure the key messages and waste valuable resources. This paper aims to shed light on this pervasive issue, highlight its potential adverse consequences and explore the underlying reasons. We will present two case examples to illustrate our points and offer practical solutions and recommendations. Finally, we will conclude this paper with a remark.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-13"},"PeriodicalIF":1.2,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145490725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-08 | DOI: 10.1080/10543406.2025.2575941
The performance of odds ratio estimation under different scenarios in Bayesian meta-analysis: A simulation study.
Esin Avci
This study presents a comprehensive evaluation of Bayesian meta-analysis methods for estimating odds ratios (ORs), with a focus on the impact of heterogeneity and prior distribution choices under varying conditions. Recognizing the limitations of frequentist approaches, especially in small-sample or rare-event scenarios, we implemented a Bayesian framework utilizing four different priors for heterogeneity: half-normal, exponential, half-Cauchy, and inverse-gamma. Simulation studies were conducted across 1,152 scenarios, varying the number of studies, event rarity, randomization ratios, and baseline risks. Results indicate that prior specification and study size substantially influence estimation accuracy, particularly for rare events. To further explore these interactions, a CHAID (Chi-square Automatic Interaction Detection) analysis is implemented, which effectively identifies the key factors affecting model performance. CHAID revealed that the number of studies included in the meta-analysis (NSMA) is the most significant determinant of estimation reliability, while other variables such as event type and randomization ratio exert notable influence under specific conditions. CHAID also facilitated the categorization of OR estimation quality and heterogeneity levels, offering a powerful visual and interpretive aid. Overall, this study underscores the importance of prior selection in Bayesian meta-analysis and highlights CHAID analysis as a valuable complementary tool for uncovering complex interactions and enhancing result interpretability.
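As a hedged companion to the simulation setting described, the sketch below fits a normal-normal random-effects model to study-level log odds ratios by simple importance sampling and compares two of the heterogeneity priors discussed (half-normal and half-Cauchy). The data, prior scales, and sampler are illustrative stand-ins for the paper's full Bayesian implementation.

```python
# Sketch of a normal-normal random-effects meta-analysis of study log odds ratios,
# comparing two heterogeneity priors via simple importance sampling; the data and
# prior scales are assumptions, and the paper's design is not reproduced here.
import numpy as np

rng = np.random.default_rng(11)
y = np.array([0.42, 0.10, 0.65, 0.31, -0.05])     # study log ORs (placeholders)
s = np.array([0.25, 0.30, 0.40, 0.20, 0.35])      # their standard errors

def posterior_mean_or(tau_draws, label):
    m = len(tau_draws)
    mu = rng.normal(0.0, 2.0, m)                  # vague prior on the pooled log OR
    var = s[None, :] ** 2 + tau_draws[:, None] ** 2
    loglik = (-0.5 * np.log(2 * np.pi * var)
              - 0.5 * (y[None, :] - mu[:, None]) ** 2 / var).sum(axis=1)
    w = np.exp(loglik - loglik.max())
    w /= w.sum()                                  # self-normalized importance weights
    print(f"{label:12s} posterior mean OR = {np.exp(np.sum(w * mu)):.3f}")

M = 200_000
posterior_mean_or(np.abs(rng.normal(0.0, 0.5, M)), "half-normal")
posterior_mean_or(np.abs(rng.standard_cauchy(M)) * 0.5, "half-Cauchy")
```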
{"title":"The performance of odds ratio estimation under different scenarios in Bayesian meta-analysis: A simulation study.","authors":"Esin Avci","doi":"10.1080/10543406.2025.2575941","DOIUrl":"https://doi.org/10.1080/10543406.2025.2575941","url":null,"abstract":"<p><p>This study presents a comprehensive evaluation of Bayesian meta-analysis methods for estimating odds ratios (ORs), with a focus on the impact of heterogeneity and prior distribution choices under varying conditions. Recognizing the limitations of frequentist approaches, especially in small-sample or rare-event scenarios, we implemented a Bayesian framework utilizing four different priors for heterogeneity: half-normal, exponential, half-Cauchy, and inverse-gamma. Simulation studies were conducted across 1,152 scenarios, varying the number of studies, event rarity, randomization ratios, and baseline risks. Results indicate that prior specification and study size substantially influence estimation accuracy, particularly for rare events. To further explore these interactions, CHAID (Chi-square Automatic Interaction Detection) analysis, which effectively identified key factors affecting model performance, is implemented. CHAID revealed that the number of studies included in the meta-analysis (NSMA) is the most significant determinant of estimation reliability, while other variables such as event type and randomization ratio exert notable influence under specific conditions. CHAID also facilitated the categorization of OR estimation quality and heterogeneity levels, offering a powerful visual and interpretive aid. Overall, this study underscores the importance of prior selection in Bayesian meta-analysis and highlights CHAID analysis as a valuable complementary tool for uncovering complex interactions and enhancing result interpretability.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-27"},"PeriodicalIF":1.2,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145472375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-21 | DOI: 10.1080/10543406.2025.2547585
Bayesian network meta-regression for aggregate ordinal outcomes with imprecise categories.
Yeongjin Gwon, Ming-Hui Chen, May Mo, Xun Jiang, H Amy Xia, Joseph G Ibrahim
Comparing emerging treatment options is often challenging because of the sparseness of direct comparisons from head-to-head trials and inconsistencies in outcome measures among published placebo-controlled trials for each treatment. One potential solution is to aggregate the different outcome measures into a single ordinal response variable for consistent evaluation. The ordinal response variable will inevitably contain unknown response categories because they cannot be directly derived from published data in the literature. In this paper, we propose a statistical methodology to overcome this common but unresolved issue in the context of network meta-regression for aggregate ordinal outcomes. Specifically, we introduce unobserved latent counts and model these counts within a Bayesian framework. The proposed approach includes several existing models as special cases and allows a proper statistical analysis in the presence of trials with certain missing categories. We then develop an efficient Markov chain Monte Carlo sampling algorithm to carry out Bayesian computation. Variants of the deviance information criterion (DIC) and the widely applicable information criterion (WAIC) are used to assess goodness-of-fit under different distributions of the latent counts. A case study demonstrating the usefulness of the proposed methodology is conducted using aggregate ordinal outcome data from 18 clinical trials in Crohn's disease.
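The network meta-regression model itself is not reproduced here; as a small companion to the model-assessment step, the sketch below shows the standard WAIC computation from a matrix of pointwise log-likelihoods, with random placeholder input standing in for actual posterior draws.

```python
# Small sketch of the WAIC computation used for model assessment: given an
# S x N matrix of pointwise log-likelihoods (S posterior draws, N observations),
# WAIC = -2 * (lppd - p_waic). The matrix below is random placeholder input;
# the paper's network meta-regression model is not implemented here.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(5)
log_lik = rng.normal(-1.2, 0.3, size=(1000, 18))    # placeholder: 1000 draws, 18 trials

lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(log_lik.shape[0]))
p_waic = np.sum(log_lik.var(axis=0, ddof=1))        # effective number of parameters
waic = -2 * (lppd - p_waic)
print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}")
```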
{"title":"Bayesian network meta-regression for aggregate ordinal outcomes with imprecise categories.","authors":"Yeongjin Gwon, Ming-Hui Chen, May Mo, Xun Jiang, H Amy Xia, Joseph G Ibrahim","doi":"10.1080/10543406.2025.2547585","DOIUrl":"https://doi.org/10.1080/10543406.2025.2547585","url":null,"abstract":"<p><p>Comparing emerging treatment options is often challenging because of the sparseness of direct comparisons from head-to-head trials and inconsistencies in outcome measures among published placebo-controlled trials for each treatment. One potential solution is to aggregate the different outcome measures into a single ordinal response variable for consistent evaluation. The ordinal response variable will inevitably contain unknown response categories because they cannot be directly derived from published data in the literature. In this paper, we propose a statistical methodology to overcome such a common but unresolved issue in the context of network meta-regression for aggregate ordinal outcomes. Specifically, we introduce unobserved latent counts and model these counts within a Bayesian framework. The proposed approach includes several existing models as special cases and also allows us to conduct a proper statistical analysis in the presence of trials with certain missing categories. We then develop an efficient Markov chain Monte Carlo sampling algorithm to carry out Bayesian computation. Variations of the deviance information criterion and widely applicable information criterion are used for the assessment of goodness-of-fit under different distributions of the latent counts. A case study demonstrating the usefulness of the proposed methodology is conducted using aggregate ordinal outcome data from 18 clinical trials in treating Crohn's Disease.</p>","PeriodicalId":54870,"journal":{"name":"Journal of Biopharmaceutical Statistics","volume":" ","pages":"1-22"},"PeriodicalIF":1.2,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145338245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}