Synergy detection: A practical guide to statistical assessment of potential drug combinations.
Pub Date: 2024-04-02. DOI: 10.1002/pst.2383. Pharmaceutical Statistics.
Elli Makariadou, Xuechen Wang, Nicholas Hein, Negera W Deresa, Kathy Mutambanengwe, Bie Verbist, Olivier Thas
Combination treatments have been of increasing importance in drug development across therapeutic areas to improve treatment response, minimize the development of resistance, and/or minimize adverse events. Pre-clinical in-vitro combination experiments aim to explore the potential of such drug combinations during drug discovery by comparing the observed effect of the combination with the expected treatment effect under the assumption of no interaction (i.e., the null model). This tutorial will address important design aspects of such experiments to allow proper statistical evaluation. Additionally, it will highlight the Biochemically Intuitive Generalized Loewe methodology (BIGL R package, available on CRAN) to statistically detect deviations from the expectation under different null models. A clear advantage of the methodology is that it quantifies effect sizes, together with confidence intervals, while controlling the directional false coverage rate. Finally, a case study will showcase the workflow for analyzing combination experiments.
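For readers unfamiliar with the null model referenced in this abstract, the classical Loewe additivity condition underlying the generalized approach can be stated compactly. The block below is background only, not the tutorial's generalized formulation (which extends this idea, e.g., to marginal dose-response curves with differing maximal effects).

```latex
% Classical Loewe additivity null model (background sketch).
% D_1(y), D_2(y): monotherapy doses of the two compounds that each
% produce effect y on their own; (d_1, d_2): the tested combination dose.
\[
  \frac{d_1}{D_1(y)} + \frac{d_2}{D_2(y)} = 1 .
\]
% A combination (d_1, d_2) is called synergistic (antagonistic) when its
% observed effect is stronger (weaker) than the effect y solving this equation.
```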
{"title":"Synergy detection: A practical guide to statistical assessment of potential drug combinations.","authors":"Elli Makariadou, Xuechen Wang, Nicholas Hein, Negera W Deresa, Kathy Mutambanengwe, Bie Verbist, Olivier Thas","doi":"10.1002/pst.2383","DOIUrl":"https://doi.org/10.1002/pst.2383","url":null,"abstract":"<p><p>Combination treatments have been of increasing importance in drug development across therapeutic areas to improve treatment response, minimize the development of resistance, and/or minimize adverse events. Pre-clinical in-vitro combination experiments aim to explore the potential of such drug combinations during drug discovery by comparing the observed effect of the combination with the expected treatment effect under the assumption of no interaction (i.e., null model). This tutorial will address important design aspects of such experiments to allow proper statistical evaluation. Additionally, it will highlight the Biochemically Intuitive Generalized Loewe methodology (BIGL R package available on CRAN) to statistically detect deviations from the expectation under different null models. A clear advantage of the methodology is the quantification of the effect sizes, together with confidence interval while controlling the directional false coverage rate. Finally, a case study will showcase the workflow in analyzing combination experiments.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140336499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.
Pub Date: 2024-03-17. DOI: 10.1002/pst.2379. Pharmaceutical Statistics.
Fasheng Li, Beverly Nickerson, Les Van Alstine, Ke Wang
In vitro dissolution testing is a regulatory-required critical quality measure for solid-dose pharmaceutical drug products. Setting acceptance criteria that meet compendial criteria is required for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications, and visualizing results can vary according to product requirements, company practices, and scientific judgement. This paper provides a general description of the steps taken in evaluating and setting in vitro dissolution specifications at release and on stability.
{"title":"Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.","authors":"Fasheng Li, Beverly Nickerson, Les Van Alstine, Ke Wang","doi":"10.1002/pst.2379","DOIUrl":"https://doi.org/10.1002/pst.2379","url":null,"abstract":"<p><p>In vitro dissolution testing is a regulatory required critical quality measure for solid dose pharmaceutical drug products. Setting the acceptance criteria to meet compendial criteria is required for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications and visualizing results could vary according to product requirements, company's practices, and scientific judgements. This paper provides a general description of the steps taken in the evaluation and setting of in vitro dissolution specifications at release and on stability.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140143994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dynamic power prior approach to non-inferiority trials for normal means.
Pub Date: 2024-03-01 (Epub 2023-11-14). DOI: 10.1002/pst.2349. Pharmaceutical Statistics, pp. 242-256.
Francesco Mariani, Fulvio De Santis, Stefania Gubbiotti
Non-inferiority trials compare new experimental therapies to standard ones (active control). In these experiments, historical information on the control treatment is often available. This makes Bayesian methodology appealing, since it offers a natural way to exploit information from past studies. In the present paper, we suggest using previous data to construct the prior distribution of the control effect parameter. Specifically, we consider a dynamic power prior that can discount the level of borrowing in the presence of heterogeneity between past and current control data. The discount parameter of the prior is based on the Hellinger distance between the posterior distributions of the control parameter obtained, respectively, from historical and current data. We develop the methodology for comparing normal means and handle the unknown-variance case using MCMC. We also provide a simulation study that analyzes the proposed test in terms of frequentist size and power, as usually requested by regulatory agencies. Finally, we compare the proposal with some existing methods and illustrate an application to a real case study.
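To make the discounting mechanism concrete, here is a minimal numerical sketch. It assumes known sampling variance, flat initial priors, summary-level normal posteriors, and the simple mapping a0 = 1 − H from Hellinger distance to the power parameter; the paper's actual construction, its MCMC treatment of the unknown variance, and its calibration may differ.

```python
import numpy as np

def hellinger_normal(m1, s1, m2, s2):
    """Closed-form Hellinger distance between N(m1, s1^2) and N(m2, s2^2)."""
    h2 = 1.0 - np.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2)) * np.exp(
        -0.25 * (m1 - m2) ** 2 / (s1**2 + s2**2)
    )
    return np.sqrt(h2)

# Posteriors for the control mean from historical and current data alone
# (flat prior, known sampling SD 'sigma'; all numbers are illustrative).
sigma = 10.0
hist_mean, hist_n = 48.0, 120
curr_mean, curr_n = 52.0, 60

post_hist = (hist_mean, sigma / np.sqrt(hist_n))
post_curr = (curr_mean, sigma / np.sqrt(curr_n))

# Map the distance to a discount (power) parameter in [0, 1]; a0 = 1 - H is
# one simple monotone choice, not necessarily the paper's calibration.
h = hellinger_normal(*post_hist, *post_curr)
a0 = 1.0 - h

print(f"Hellinger distance: {h:.3f}, discount a0: {a0:.3f}, "
      f"borrowed information ~ {a0 * hist_n:.1f} historical patients")
```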
{"title":"A dynamic power prior approach to non-inferiority trials for normal means.","authors":"Francesco Mariani, Fulvio De Santis, Stefania Gubbiotti","doi":"10.1002/pst.2349","DOIUrl":"10.1002/pst.2349","url":null,"abstract":"<p><p>Non-inferiority trials compare new experimental therapies to standard ones (active control). In these experiments, historical information on the control treatment is often available. This makes Bayesian methodology appealing since it allows a natural way to exploit information from past studies. In the present paper, we suggest the use of previous data for constructing the prior distribution of the control effect parameter. Specifically, we consider a dynamic power prior that possibly allows to discount the level of borrowing in the presence of heterogeneity between past and current control data. The discount parameter of the prior is based on the Hellinger distance between the posterior distributions of the control parameter based, respectively, on historical and current data. We develop the methodology for comparing normal means and we handle the unknown variance assumption using MCMC. We also provide a simulation study to analyze the proposed test in terms of frequentist size and power, as it is usually requested by regulatory agencies. Finally, we investigate comparisons with some existing methods and we illustrate an application to a real case study.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"242-256"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"107591987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frequentist and Bayesian tolerance intervals for setting specification limits for left-censored gamma distributed drug quality attributes.
Pub Date: 2024-03-01 (Epub 2023-10-23). DOI: 10.1002/pst.2344. Pharmaceutical Statistics, pp. 168-184.
Richard O Montes
Tolerance intervals from quality attribute measurements are used to establish specification limits for drug products. Some attribute measurements may be below the reporting limit, that is, left-censored. When data have a long right-skewed tail, a gamma distribution may be applicable. This paper compares maximum likelihood estimation (MLE) and Bayesian methods for estimating the shape and scale parameters of censored gamma distributions and for calculating tolerance intervals under varying sample sizes and extents of censoring. The noninformative reference prior and the maximal data information prior (MDIP) are used to compare the impact of prior choice. The metrics used are bias and root mean square error for parameter estimation, and average length and confidence coefficient for the tolerance interval evaluation. It is shown that the Bayesian method using a reference prior performs better overall than MLE for the scenarios evaluated. When the sample size is small, the Bayesian method using the MDIP yields conservatively wide tolerance intervals that are an unsuitable basis for specification setting. The metrics for all methods worsened with increasing extent of censoring but improved with increasing sample size, as expected. This study demonstrates that although MLE is relatively simple and available in user-friendly statistical software, it can fall short in accurately and precisely producing tolerance limits that maintain the stated confidence, depending on the scenario. The Bayesian method using the noninformative prior, even though computationally intensive and requiring considerable statistical programming, produces tolerance limits that are practically useful for specification setting. Real-world examples are provided to illustrate the findings from the simulation study.
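As a companion to the comparison described above, the following sketch shows only the MLE half, assuming a single known reporting limit and scipy's shape/scale gamma parameterization; the Bayesian fits, the tolerance-factor construction, and the paper's simulation settings are not reproduced.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
shape_true, scale_true, lod = 2.0, 5.0, 3.0      # illustrative values
y = rng.gamma(shape_true, scale_true, size=30)
censored = y < lod                                # left-censored below reporting limit

def neg_loglik(params):
    """Censored-gamma negative log-likelihood; censored points contribute F(lod)."""
    shape, scale = np.exp(params)                 # optimize on the log scale
    ll_obs = stats.gamma.logpdf(y[~censored], a=shape, scale=scale).sum()
    ll_cens = censored.sum() * stats.gamma.logcdf(lod, a=shape, scale=scale)
    return -(ll_obs + ll_cens)

fit = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)

# Naive plug-in 99th percentile as an upper bound; a proper tolerance limit
# additionally accounts for estimation uncertainty, as studied in the paper.
print(shape_hat, scale_hat, stats.gamma.ppf(0.99, a=shape_hat, scale=scale_hat))
```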
{"title":"Frequentist and Bayesian tolerance intervals for setting specification limits for left-censored gamma distributed drug quality attributes.","authors":"Richard O Montes","doi":"10.1002/pst.2344","DOIUrl":"10.1002/pst.2344","url":null,"abstract":"<p><p>Tolerance intervals from quality attribute measurements are used to establish specification limits for drug products. Some attribute measurements may be below the reporting limits, that is, left-censored data. When data has a long, right-skew tail, a gamma distribution may be applicable. This paper compares maximum likelihood estimation (MLE) and Bayesian methods to estimate shape and scale parameters of censored gamma distributions and to calculate tolerance intervals under varying sample sizes and extents of censoring. The noninformative reference prior and the maximal data information prior (MDIP) are used to compare the impact of prior choice. Metrics used are bias and root mean square error for the parameter estimation and average length and confidence coefficient for the tolerance interval evaluation. It will be shown that Bayesian method using a reference prior overall performs better than MLE for the scenarios evaluated. When sample size is small, the Bayesian method using MDIP yields conservatively too wide tolerance intervals that are unsuitable basis for specification setting. The metrics for all methods worsened with increasing extent of censoring but improved with increasing sample size, as expected. This study demonstrates that although MLE is relatively simple and available in user-friendly statistical software, it falls short in accurately and precisely producing tolerance limits that maintain the stated confidence depending on the scenario. The Bayesian method using noninformative prior, even though computationally intensive and requires considerable statistical programming, produces tolerance limits which are practically useful for specification setting. Real-world examples are provided to illustrate the findings from the simulation study.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"168-184"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49691915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probability of success and group sequential designs.
Pub Date: 2024-03-01 (Epub 2023-11-02). DOI: 10.1002/pst.2346. Pharmaceutical Statistics, pp. 185-203.
Andrew P Grieve
In this article, I extend the use of probability of success calculations, previously developed for fixed sample size studies, to group sequential designs (GSDs), both for studies planned to be analyzed by standard frequentist techniques and for those using Bayesian approaches. The structure of GSDs lends itself to sequential learning, which in turn allows us to consider how knowledge about the result of an interim analysis can influence our assessment of the study's probability of success. In particular, I build on work by Temple and Robertson, who introduced the idea of conditional probability of success, an idea which I also treated in a recent monograph.
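For orientation, the fixed-design probability of success (assurance) that the article extends can be sketched as power averaged over a design prior. The normal/known-variance setting and all numbers below are illustrative assumptions; the group sequential and conditional-probability-of-success extensions are not reproduced.

```python
import numpy as np
from scipy import stats

# Fixed-design assurance sketch: normal endpoint, known SD, one-sided z-test.
sigma, n_per_arm, alpha = 1.0, 100, 0.025
se = sigma * np.sqrt(2.0 / n_per_arm)
z_crit = stats.norm.ppf(1.0 - alpha)

# Design prior on the true treatment effect (prior mean/SD are assumptions).
prior_mean, prior_sd = 0.25, 0.10

def power(delta):
    """Frequentist power of the one-sided z-test at true effect delta."""
    return stats.norm.sf(z_crit - delta / se)

rng = np.random.default_rng(0)
deltas = rng.normal(prior_mean, prior_sd, size=200_000)
print("probability of success (assurance):", round(power(deltas).mean(), 3))
```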
{"title":"Probability of success and group sequential designs.","authors":"Andrew P Grieve","doi":"10.1002/pst.2346","DOIUrl":"10.1002/pst.2346","url":null,"abstract":"<p><p>In this article, I extend the use of probability of success calculations, previously developed for fixed sample size studies to group sequential designs (GSDs) both for studies planned to be analyzed by standard frequentist techniques or Bayesian approaches. The structure of GSDs lends itself to sequential learning which in turn allows us to consider how knowledge about the result of an interim analysis can influence our assessment of the study's probability of success. In this article, I build on work by Temple and Robertson who introduced the idea of conditional probability of success, an idea which I also treated in a recent monograph.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"185-203"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71425793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of duration of follow-up and lag in data collection on the performance of adaptive clinical trials.
Pub Date: 2024-03-01 (Epub 2023-10-14). DOI: 10.1002/pst.2342. Pharmaceutical Statistics, pp. 138-150. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10935606/pdf/
Anders Granholm, Theis Lange, Michael O Harhay, Aksel Karl Georg Jensen, Anders Perner, Morten Hylander Møller, Benjamin Skov Kaas-Hansen
Different combined outcome-data lags (follow-up durations plus data-collection lags) may affect the performance of adaptive clinical trial designs. We assessed the influence of different outcome-data lags (0-105 days) on the performance of various multi-stage, adaptive trial designs (2/4 arms, with/without a common control, fixed/response-adaptive randomisation) with undesirable binary outcomes according to different inclusion rates (3.33/6.67/10 patients/day) under scenarios with no, small, and large differences. Simulations were conducted under a Bayesian framework, with constant stopping thresholds for superiority/inferiority calibrated to keep type-1 error rates at approximately 5%. We assessed multiple performance metrics, including mean sample sizes, event counts/probabilities, probabilities of conclusiveness, root mean squared errors (RMSEs) of the estimated effect in the selected arms, and RMSEs between the analyses at the time of stopping and the final analyses including data from all randomised patients. Performance metrics generally deteriorated when the proportions of randomised patients with available data were smaller due to longer outcome-data lags or faster inclusion, that is, mean sample sizes, event counts/probabilities, and RMSEs were larger, while the probabilities of conclusiveness were lower. Performance metric impairments with outcome-data lags ≤45 days were relatively smaller compared to those occurring with ≥60 days of lag. For most metrics, the effects of different outcome-data lags and lower proportions of randomised patients with available data were larger than those of different design choices, for example, the use of fixed versus response-adaptive randomisation. Increased outcome-data lag substantially affected the performance of adaptive trial designs. Trialists should consider the effects of outcome-data lags when planning adaptive trials.
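A back-of-the-envelope illustration of the driving mechanism (steady accrual assumed; the designs, outcomes, and metrics of the simulation study are not reproduced): with an inclusion rate of r patients/day and a combined outcome-data lag of L days, an interim look taken after N inclusions has outcome data on roughly N − r·L patients.

```python
# Share of randomised patients with available outcome data at an interim look,
# under steady accrual (numbers are illustrative, not the article's scenarios).
def available_fraction(n_randomised, rate_per_day, lag_days):
    with_data = max(0.0, n_randomised - rate_per_day * lag_days)
    return with_data / n_randomised

for lag in (0, 45, 60, 105):                 # combined outcome-data lag in days
    print(lag, round(available_fraction(400, 3.33, lag), 2))
```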
{"title":"Effects of duration of follow-up and lag in data collection on the performance of adaptive clinical trials.","authors":"Anders Granholm, Theis Lange, Michael O Harhay, Aksel Karl Georg Jensen, Anders Perner, Morten Hylander Møller, Benjamin Skov Kaas-Hansen","doi":"10.1002/pst.2342","DOIUrl":"10.1002/pst.2342","url":null,"abstract":"<p><p>Different combined outcome-data lags (follow-up durations plus data-collection lags) may affect the performance of adaptive clinical trial designs. We assessed the influence of different outcome-data lags (0-105 days) on the performance of various multi-stage, adaptive trial designs (2/4 arms, with/without a common control, fixed/response-adaptive randomisation) with undesirable binary outcomes according to different inclusion rates (3.33/6.67/10 patients/day) under scenarios with no, small, and large differences. Simulations were conducted under a Bayesian framework, with constant stopping thresholds for superiority/inferiority calibrated to keep type-1 error rates at approximately 5%. We assessed multiple performance metrics, including mean sample sizes, event counts/probabilities, probabilities of conclusiveness, root mean squared errors (RMSEs) of the estimated effect in the selected arms, and RMSEs between the analyses at the time of stopping and the final analyses including data from all randomised patients. Performance metrics generally deteriorated when the proportions of randomised patients with available data were smaller due to longer outcome-data lags or faster inclusion, that is, mean sample sizes, event counts/probabilities, and RMSEs were larger, while the probabilities of conclusiveness were lower. Performance metric impairments with outcome-data lags ≤45 days were relatively smaller compared to those occurring with ≥60 days of lag. For most metrics, the effects of different outcome-data lags and lower proportions of randomised patients with available data were larger than those of different design choices, for example, the use of fixed versus response-adaptive randomisation. Increased outcome-data lag substantially affected the performance of adaptive trial designs. Trialists should consider the effects of outcome-data lags when planning adaptive trials.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"138-150"},"PeriodicalIF":1.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10935606/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41208637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An illness-death multistate model to implement delta adjustment and reference-based imputation with time-to-event endpoints.
Pub Date: 2024-03-01 (Epub 2023-11-08). DOI: 10.1002/pst.2348. Pharmaceutical Statistics, pp. 219-241.
Alberto García-Hernandez, Teresa Pérez, María Del Carmen Pardo, Dimitris Rizopoulos
With a treatment policy strategy, therapies are evaluated regardless of the disturbance caused by intercurrent events (ICEs). Implementing this estimand is challenging if subjects are not followed up after the ICE. This circumstance can be dealt with using delta adjustment (DA) or reference-based (RB) imputation. In the survival field, DA and RB imputation have so far been researched using multiple imputation (MI). Here, we present a fully analytical solution. We use the illness-death multistate model with the following transitions: (a) from the initial state to the event of interest, (b) from the initial state to the ICE, and (c) from the ICE to the event. We estimate the intensity functions of transitions (a) and (b) using flexible parametric survival models. Transition (c) is assumed unobserved but identifiable using DA or RB imputation assumptions. Various rules are considered: no ICE effect, DA under proportional hazards (PH) or additive hazards (AH), jump to reference (J2R), and copy increment from reference (under either PH or AH). We obtain the marginal survival curve of interest by calculating, via numerical integration, the probability of transitioning from the initial state to the event of interest regardless of whether the ICE state has been visited. We use the delta method to obtain standard errors (SEs). Finally, we quantify the performance of the proposed estimator through simulations and compare it against MI. Our analytical solution is more efficient than MI and avoids SE misestimation, a known phenomenon associated with Rubin's variance equation.
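To illustrate the final integration step described above, here is a small numerical sketch with constant transition intensities taken as known (purely illustrative values); the article instead estimates flexible parametric intensities, applies several DA/RB rules to the unobserved transition, and derives delta-method standard errors.

```python
import numpy as np

# Constant transition intensities (illustrative only):
# initial->event (h01), initial->ICE (h02), ICE->event (h12).
h01, h02, h12 = 0.05, 0.08, 0.10
t_max, m = 24.0, 2000
u = np.linspace(0.0, t_max, m + 1)
du = u[1] - u[0]

occ0 = np.exp(-(h01 + h02) * u)          # P(still in the initial state at u)

# P(in the ICE state at u): entered ICE at some s <= u, no ICE->event move since.
occ2 = np.array([
    np.trapz(occ0[: i + 1] * h02 * np.exp(-h12 * (u[i] - u[: i + 1])), dx=du)
    for i in range(m + 1)
])

# Cumulative probability of the event of interest by t, whether or not the ICE
# occurred first; one minus this is the marginal survival curve of interest.
cum_event = np.cumsum((occ0 * h01 + occ2 * h12) * du)
idx = np.searchsorted(u, 12.0)
print("P(event by t=12):", round(cum_event[idx], 4),
      "marginal survival:", round(1.0 - cum_event[idx], 4))
```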
{"title":"An illness-death multistate model to implement delta adjustment and reference-based imputation with time-to-event endpoints.","authors":"Alberto García-Hernandez, Teresa Pérez, María Del Carmen Pardo, Dimitris Rizopoulos","doi":"10.1002/pst.2348","DOIUrl":"10.1002/pst.2348","url":null,"abstract":"<p><p>With a treatment policy strategy, therapies are evaluated regardless of the disturbance caused by intercurrent events (ICEs). Implementing this estimand is challenging if subjects are not followed up after the ICE. This circumstance can be dealt with using delta adjustment (DA) or reference-based (RB) imputation. In the survival field, DA and RB imputation have been researched so far using multiple imputation (MI). Here, we present a fully analytical solution. We use the illness-death multistate model with the following transitions: (a) from the initial state to the event of interest, (b) from the initial state to the ICE, and (c) from the ICE to the event. We estimate the intensity function of transitions (a) and (b) using flexible parametric survival models. Transition (c) is assumed unobserved but identifiable using DA or RB imputation assumptions. Various rules have been considered: no ICE effect, DA under proportional hazards (PH) or additive hazards (AH), jump to reference (J2R), and (either PH or AH) copy increment from reference. We obtain the marginal survival curve of interest by calculating, via numerical integration, the probability of transitioning from the initial state to the event of interest regardless of having passed or not by the ICE state. We use the delta method to obtain standard errors (SEs). Finally, we quantify the performance of the proposed estimator through simulations and compare it against MI. Our analytical solution is more efficient than MI and avoids SE misestimation-a known phenomenon associated with Rubin's variance equation.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"219-241"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71522338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditional power and information fraction calculations at an interim analysis for random coefficient models.
Pub Date: 2024-03-01 (Epub 2023-11-02). DOI: 10.1002/pst.2345. Pharmaceutical Statistics, pp. 276-283.
Sandra A Lewis, Kevin J Carroll, Todd DeVries, Jonathan Barratt
Random coefficient (RC) models are commonly used in clinical trials to estimate the rate of change over time in longitudinal data. Using a surrogate endpoint for accelerated approval, with a confirmatory longitudinal endpoint to show clinical benefit, is a strategy implemented across various therapeutic areas, including immunoglobulin A nephropathy. Understanding conditional power (CP) and information fraction calculations for RC models may help in the design of clinical trials as well as provide support for the confirmatory endpoint at the time of accelerated approval. This paper provides calculation methods, with practical examples, for determining CP at an interim analysis for an RC model with longitudinal data, such as estimated glomerular filtration rate (eGFR) assessments used to measure the rate of change in eGFR slope.
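As background, the generic conditional power identity on which such calculations are typically built can be written as follows, for a normally distributed test statistic with drift parameter θ and information levels I_k at the interim and I_max at the final analysis. This is a standard sketch, not the article's RC-model-specific derivation for eGFR slopes.

```latex
% Generic conditional power for a normal test statistic (sketch).
% Z_k: observed interim z-statistic at information level I_k;
% I_max: planned final information; theta: assumed drift; one-sided level alpha.
\[
  \mathrm{CP}(\theta)
  = \Phi\!\left(
      \frac{Z_k\sqrt{I_k} - z_{1-\alpha}\sqrt{I_{\max}}
            + \theta\,\bigl(I_{\max} - I_k\bigr)}
           {\sqrt{I_{\max} - I_k}}
    \right),
  \qquad
  \text{information fraction } t_k = I_k / I_{\max}.
\]
```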
{"title":"Conditional power and information fraction calculations at an interim analysis for random coefficient models.","authors":"Sandra A Lewis, Kevin J Carroll, Todd DeVries, Jonathan Barratt","doi":"10.1002/pst.2345","DOIUrl":"10.1002/pst.2345","url":null,"abstract":"<p><p>Random coefficient (RC) models are commonly used in clinical trials to estimate the rate of change over time in longitudinal data. Trials utilizing a surrogate endpoint for accelerated approval with a confirmatory longitudinal endpoint to show clinical benefit is a strategy implemented across various therapeutic areas, including immunoglobulin A nephropathy. Understanding conditional power (CP) and information fraction calculations of RC models may help in the design of clinical trials as well as provide support for the confirmatory endpoint at the time of accelerated approval. This paper provides calculation methods, with practical examples, for determining CP at an interim analysis for a RC model with longitudinal data, such as estimated glomerular filtration rate (eGFR) assessments to measure rate of change in eGFR slope.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"276-283"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71425792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Propensity score-incorporated adaptive design approaches when incorporating real-world data.
Pub Date: 2024-03-01. DOI: 10.1002/pst.2347. Pharmaceutical Statistics, pp. 204-218.
Nelson Lu, Wei-Chen Chen, Heng Li, Changhong Song, Ram Tiwari, Chenguang Wang, Yunling Xu, Lilly Q Yue
The propensity score-integrated composite likelihood (PSCL) method can be used to design and analyze an application in which real-world data (RWD) are leveraged to augment a prospectively designed clinical study. In PSCL, strata are formed based on propensity scores (PS) so that subjects who are similar in terms of baseline covariates, from both the current study and the RWD sources, are placed in the same stratum; the composite likelihood method is then applied to down-weight the information from the RWD. While PSCL was originally proposed for a fixed design, it can be extended to an adaptive design framework with the purpose of either potentially claiming early success or re-estimating the sample size. In this paper, owing to the structure of PSCL, a general strategy is proposed. For the possibility of claiming early success, Fisher's combination test is utilized. When the purpose is to re-estimate the sample size, the proposed procedure is based on the test proposed by Cui, Hung, and Wang. The implementation of these two procedures is demonstrated via an example.
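A minimal sketch of the early-success building block named above: the plain Fisher combination of two independent one-sided stage-wise p-values. In a genuinely adaptive two-stage design with early-stopping boundaries the critical value must be calibrated accordingly, and the stratified PSCL statistics and the Cui-Hung-Wang re-estimation step are not reproduced here.

```python
import math
from scipy import stats

def fisher_combination(p1, p2, alpha=0.025):
    """Plain Fisher combination of two independent one-sided p-values:
    under H0, -2*(ln p1 + ln p2) follows a chi-square distribution with 4 df."""
    t = -2.0 * (math.log(p1) + math.log(p2))
    return t, t > stats.chi2.ppf(1.0 - alpha, df=4)

# Illustrative stage-wise p-values (not taken from the paper's example).
print(fisher_combination(0.04, 0.03))
```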
{"title":"Propensity score-incorporated adaptive design approaches when incorporating real-world data.","authors":"Nelson Lu, Wei-Chen Chen, Heng Li, Changhong Song, Ram Tiwari, Chenguang Wang, Yunling Xu, Lilly Q Yue","doi":"10.1002/pst.2347","DOIUrl":"10.1002/pst.2347","url":null,"abstract":"<p><p>The propensity score-integrated composite likelihood (PSCL) method is one method that can be utilized to design and analyze an application when real-world data (RWD) are leveraged to augment a prospectively designed clinical study. In the PSCL, strata are formed based on propensity scores (PS) such that similar subjects in terms of the baseline covariates from both the current study and RWD sources are placed in the same stratum, and then composite likelihood method is applied to down-weight the information from the RWD. While PSCL was originally proposed for a fixed design, it can be extended to be applied under an adaptive design framework with the purpose to either potentially claim an early success or to re-estimate the sample size. In this paper, a general strategy is proposed due to the feature of PSCL. For the possibility of claiming early success, Fisher's combination test is utilized. When the purpose is to re-estimate the sample size, the proposed procedure is based on the test proposed by Cui, Hung, and Wang. The implementation of these two procedures is demonstrated via an example.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"204-218"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138445736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enrollment forecast for clinical trials at the portfolio planning phase based on site-level historical data.
Pub Date: 2024-03-01 (Epub 2023-10-23). DOI: 10.1002/pst.2343. Pharmaceutical Statistics, pp. 151-167.
Sheng Zhong, Yunzhao Xing, Mengjia Yu, Li Wang
An accurate forecast of the clinical trial enrollment timeline at the planning phase is of great importance to both corporate strategic planning and trial operational excellence. The naive approach often calculates an average enrollment rate from historical data and generates an inaccurate prediction based on a linear trend with that average rate. Under the traditional framework of a Poisson-Gamma model, site activation delays are often modeled with either a fixed initiation time or a simple random distribution, while incorporating user-provided site planning information to achieve good forecast accuracy. However, such user-provided information is not available at the early portfolio planning stage. We present a novel statistical approach based on generalized linear mixed-effects models and the use of non-homogeneous Poisson processes within a Bayesian framework to model country initiation, site activation, and subject enrollment sequentially in a systematic fashion. We validate the performance of our proposed enrollment modeling framework on a set of 25 preselected studies from four therapeutic areas. Our modeling framework shows a substantial improvement in prediction accuracy compared with the traditional statistical approach. Furthermore, we show that our modeling and simulation approach calibrates the data variability appropriately and gives correct coverage rates for prediction intervals at various nominal levels. Finally, we demonstrate the use of our approach to generate predicted enrollment curves over time with confidence bands overlaid.
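To fix ideas about the traditional benchmark mentioned in this abstract, here is a simple Poisson-Gamma enrollment simulation with gamma-distributed site rates and fixed, staggered activation days; all numbers are illustrative assumptions, and the article's GLMM/non-homogeneous Poisson process framework is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(7)

# Gamma-distributed site-level enrollment rates (patients/day) around a common
# mean, and fixed staggered site-activation days -- illustrative assumptions.
n_sites, shape, mean_rate = 40, 2.0, 0.15
rates = rng.gamma(shape, mean_rate / shape, size=n_sites)
activation_day = np.sort(rng.uniform(0, 120, size=n_sites))

def simulate_days_to_target(target, horizon=1000):
    """Accumulate daily Poisson enrollment across active sites until the target."""
    total = 0
    for day in range(1, horizon + 1):
        active = activation_day <= day
        total += rng.poisson(rates[active].sum())
        if total >= target:
            return day
    return horizon

finish_days = [simulate_days_to_target(300) for _ in range(500)]
print("median finish day:", np.median(finish_days),
      "90th percentile:", np.percentile(finish_days, 90))
```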
{"title":"Enrollment forecast for clinical trials at the portfolio planning phase based on site-level historical data.","authors":"Sheng Zhong, Yunzhao Xing, Mengjia Yu, Li Wang","doi":"10.1002/pst.2343","DOIUrl":"10.1002/pst.2343","url":null,"abstract":"<p><p>An accurate forecast of a clinical trial enrollment timeline at the planning phase is of great importance to both corporate strategic planning and trial operational excellence. The naive approach often calculates an average enrollment rate from historical data and generates an inaccurate prediction based on a linear trend with the average rate. Under the traditional framework of a Poisson-Gamma model, site activation delays are often modeled with either fixed initiation time or a simple random distribution while incorporating the user-provided site planning information to achieve good forecast accuracy. However, such user-provided information is not available at the early portfolio planning stage. We present a novel statistical approach based on generalized linear mixed-effects models and the use of non-homogeneous Poisson processes through the Bayesian framework to model the country initiation, site activation, and subject enrollment sequentially in a systematic fashion. We validate the performance of our proposed enrollment modeling framework based on a set of 25 preselected studies from four therapeutic areas. Our modeling framework shows a substantial improvement in prediction accuracy in comparison to the traditional statistical approach. Furthermore, we show that our modeling and simulation approach calibrates the data variability appropriately and gives correct coverage rates for prediction intervals of various nominal levels. Finally, we demonstrate the use of our approach to generate the predicted enrollment curves through time with confidence bands overlaid.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"151-167"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49691914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}