
Latest articles in Pharmaceutical Statistics

Synergy detection: A practical guide to statistical assessment of potential drug combinations.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-04-02 | DOI: 10.1002/pst.2383
Elli Makariadou, Xuechen Wang, Nicholas Hein, Negera W Deresa, Kathy Mutambanengwe, Bie Verbist, Olivier Thas

Combination treatments have been of increasing importance in drug development across therapeutic areas to improve treatment response, minimize the development of resistance, and/or minimize adverse events. Pre-clinical in-vitro combination experiments aim to explore the potential of such drug combinations during drug discovery by comparing the observed effect of the combination with the expected treatment effect under the assumption of no interaction (i.e., the null model). This tutorial will address important design aspects of such experiments to allow proper statistical evaluation. Additionally, it will highlight the Biochemically Intuitive Generalized Loewe methodology (BIGL R package available on CRAN) to statistically detect deviations from the expectation under different null models. A clear advantage of the methodology is the quantification of effect sizes, together with confidence intervals, while controlling the directional false coverage rate. Finally, a case study will showcase the workflow in analyzing combination experiments.
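The abstract centers on comparing an observed combination effect with the expectation under a no-interaction null model. The sketch below (Python, not the BIGL R implementation) illustrates only the classical Loewe additivity expectation that the generalized Loewe method builds on; the function names, Hill parameters, dose pair, and observed effect are all hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def inverse_hill(effect, e0, emax, ec50, h):
    """Dose at which a monotherapy Hill curve e0 + (emax - e0)*d^h/(ec50^h + d^h) reaches `effect`."""
    frac = (effect - e0) / (emax - e0)
    return ec50 * (frac / (1.0 - frac)) ** (1.0 / h)

def loewe_expected_effect(d1, d2, pars1, pars2, lo, hi):
    """Effect E solving d1/D1(E) + d2/D2(E) = 1, the Loewe additivity null model."""
    occupancy = lambda e: d1 / inverse_hill(e, *pars1) + d2 / inverse_hill(e, *pars2) - 1.0
    return brentq(occupancy, lo, hi)

# Hypothetical monotherapy fits (e0, emax, ec50, h) and one combination dose pair
pars_a = (0.0, 1.0, 1.0, 1.2)
pars_b = (0.0, 1.0, 3.0, 0.8)
expected = loewe_expected_effect(0.5, 1.5, pars_a, pars_b, lo=1e-6, hi=1 - 1e-6)
observed = 0.62  # mean observed effect at this dose pair (hypothetical)
print(f"Loewe-expected effect: {expected:.3f}, observed: {observed:.3f}")
# An observed effect whose confidence interval lies above the null expectation
# would point to synergy; below it, to antagonism.
```

The BIGL methodology goes further than this point comparison (generalized Loewe and other null models, effect-size confidence intervals, directional false coverage rate control); the sketch only shows the expectation being tested against.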

Citations: 0
Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-17 | DOI: 10.1002/pst.2379
Fasheng Li, Beverly Nickerson, Les Van Alstine, Ke Wang

In vitro dissolution testing is a regulatory-required critical quality measure for solid dose pharmaceutical drug products. Setting acceptance criteria that meet compendial criteria is required for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications, and visualizing results can vary according to product requirements, company practices, and scientific judgement. This paper provides a general description of the steps taken in evaluating and setting in vitro dissolution specifications at release and on stability.

Citations: 0
A dynamic power prior approach to non-inferiority trials for normal means.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-11-14 | DOI: 10.1002/pst.2349
Francesco Mariani, Fulvio De Santis, Stefania Gubbiotti

Non-inferiority trials compare new experimental therapies to standard ones (active control). In these experiments, historical information on the control treatment is often available. This makes Bayesian methodology appealing, since it allows a natural way to exploit information from past studies. In the present paper, we suggest the use of previous data for constructing the prior distribution of the control effect parameter. Specifically, we consider a dynamic power prior that allows the level of borrowing to be discounted in the presence of heterogeneity between past and current control data. The discount parameter of the prior is based on the Hellinger distance between the posterior distributions of the control parameter obtained, respectively, from historical and current data. We develop the methodology for comparing normal means, and we handle the unknown-variance case using MCMC. We also provide a simulation study to analyze the proposed test in terms of frequentist size and power, as is usually requested by regulatory agencies. Finally, we investigate comparisons with some existing methods and illustrate an application to a real case study.
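To make the key ingredient concrete, the sketch below computes the closed-form Hellinger distance between two normal approximations to the control-mean posteriors (historical data alone versus current data alone) and maps it to a power-prior discount. The mapping `delta = 1 - H` and all numerical inputs are illustrative assumptions, not the authors' exact rule.

```python
import numpy as np

def hellinger_normal(mu1, sd1, mu2, sd2):
    """Closed-form Hellinger distance between two univariate normal densities."""
    bc = np.sqrt(2.0 * sd1 * sd2 / (sd1**2 + sd2**2)) * np.exp(
        -((mu1 - mu2) ** 2) / (4.0 * (sd1**2 + sd2**2))
    )
    return np.sqrt(1.0 - bc)

# Hypothetical normal approximations to the posterior of the control mean,
# based on historical data alone and on current data alone: (mean, sd).
hist_mu, hist_sd = 10.2, 0.4
curr_mu, curr_sd = 11.0, 0.5

h = hellinger_normal(hist_mu, hist_sd, curr_mu, curr_sd)
delta = 1.0 - h  # assumed mapping from distance to a discount weight in [0, 1]
print(f"Hellinger distance: {h:.3f} -> power-prior discount: {delta:.3f}")
# The historical-control likelihood would then enter the prior raised to the
# power `delta`, so greater prior-data conflict (larger H) means less borrowing.
```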

Citations: 0
Frequentist and Bayesian tolerance intervals for setting specification limits for left-censored gamma distributed drug quality attributes.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-10-23 | DOI: 10.1002/pst.2344
Richard O Montes

Tolerance intervals from quality attribute measurements are used to establish specification limits for drug products. Some attribute measurements may be below the reporting limits, that is, left-censored data. When the data have a long, right-skewed tail, a gamma distribution may be applicable. This paper compares maximum likelihood estimation (MLE) and Bayesian methods for estimating the shape and scale parameters of censored gamma distributions and for calculating tolerance intervals under varying sample sizes and extents of censoring. The noninformative reference prior and the maximal data information prior (MDIP) are used to compare the impact of the prior choice. The metrics used are bias and root mean square error for parameter estimation, and average length and confidence coefficient for the tolerance interval evaluation. It is shown that the Bayesian method using a reference prior performs better overall than MLE for the scenarios evaluated. When the sample size is small, the Bayesian method using the MDIP yields conservatively wide tolerance intervals that are an unsuitable basis for specification setting. The metrics for all methods worsened with increasing extent of censoring but improved with increasing sample size, as expected. This study demonstrates that although MLE is relatively simple and available in user-friendly statistical software, it falls short of producing accurate and precise tolerance limits that maintain the stated confidence, depending on the scenario. The Bayesian method using a noninformative prior, even though computationally intensive and requiring considerable statistical programming, produces tolerance limits that are practically useful for specification setting. Real-world examples are provided to illustrate the findings from the simulation study.
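As a minimal sketch of the MLE side of this comparison, the code below fits a gamma distribution to data left-censored at a reporting limit, with censored observations contributing the CDF to the likelihood. The simulated data, reporting limit, and optimizer choice are illustrative assumptions; the Bayesian tolerance-interval machinery of the paper is not reproduced.

```python
import numpy as np
from scipy import stats, optimize

def neg_loglik(log_params, obs, n_cens, rep_limit):
    """Negative log-likelihood for gamma data left-censored at the reporting limit."""
    shape, scale = np.exp(log_params)              # log scale keeps both positive
    ll = stats.gamma.logpdf(obs, a=shape, scale=scale).sum()
    ll += n_cens * stats.gamma.logcdf(rep_limit, a=shape, scale=scale)
    return -ll

rng = np.random.default_rng(1)
true_shape, true_scale, rep_limit = 2.0, 0.5, 0.3   # hypothetical quality attribute
x = rng.gamma(true_shape, true_scale, size=50)
observed, n_censored = x[x >= rep_limit], int(np.sum(x < rep_limit))

res = optimize.minimize(neg_loglik, x0=np.zeros(2),
                        args=(observed, n_censored, rep_limit), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"MLE shape = {shape_hat:.2f}, scale = {scale_hat:.2f} "
      f"({n_censored} of {x.size} results below the reporting limit)")
```

A tolerance limit would then be built from the fitted distribution (for example, an upper quantile with an added confidence allowance), which is exactly where the paper reports MLE struggling to hold the stated confidence in small samples.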

Citations: 0
Probability of success and group sequential designs.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-11-02 | DOI: 10.1002/pst.2346
Andrew P Grieve

In this article, I extend the use of probability of success calculations, previously developed for fixed sample size studies, to group sequential designs (GSDs), for studies planned to be analyzed by either standard frequentist techniques or Bayesian approaches. The structure of GSDs lends itself to sequential learning, which in turn allows us to consider how knowledge of the result of an interim analysis can influence our assessment of the study's probability of success. In this article, I build on work by Temple and Robertson, who introduced the idea of conditional probability of success, an idea which I also treated in a recent monograph.
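The two quantities at play can be sketched with standard normal-approximation formulas (these are textbook expressions, not taken from the article): a design-stage probability of success that averages the power curve over a prior on the effect, and the conditional power given an interim z-value at a given information fraction. All inputs below are hypothetical.

```python
import numpy as np
from scipy import stats

def probability_of_success(prior_mean, prior_sd, n_per_arm, sigma, alpha=0.025):
    """Design-stage PoS for a two-arm z-test: power averaged over a normal prior on the effect."""
    se = sigma * np.sqrt(2.0 / n_per_arm)            # SE of the mean difference at full information
    z_crit = stats.norm.ppf(1.0 - alpha)
    # Final z ~ N(theta/se, 1) with theta ~ N(prior_mean, prior_sd^2)
    return 1.0 - stats.norm.cdf(z_crit, loc=prior_mean / se,
                                scale=np.sqrt(1.0 + (prior_sd / se) ** 2))

def conditional_power(z_interim, info_frac, theta, se_full, alpha=0.025):
    """Conditional power at an interim, assuming effect `theta` for the remaining data."""
    z_crit = stats.norm.ppf(1.0 - alpha)
    drift = theta / se_full                          # expected final z under theta
    num = z_crit - z_interim * np.sqrt(info_frac) - drift * (1.0 - info_frac)
    return 1.0 - stats.norm.cdf(num / np.sqrt(1.0 - info_frac))

pos = probability_of_success(prior_mean=0.3, prior_sd=0.15, n_per_arm=100, sigma=1.0)
cp = conditional_power(z_interim=1.2, info_frac=0.5, theta=0.3,
                       se_full=1.0 * np.sqrt(2.0 / 100))
print(f"Design-stage PoS: {pos:.2f}, interim conditional power: {cp:.2f}")
```

In essence, the sequential-learning point of the article corresponds to replacing the unconditional average with an average of the conditional power over the effect distribution updated by the interim data.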

Citations: 0
Effects of duration of follow-up and lag in data collection on the performance of adaptive clinical trials.
IF 1.3 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-10-14 | DOI: 10.1002/pst.2342
Anders Granholm, Theis Lange, Michael O Harhay, Aksel Karl Georg Jensen, Anders Perner, Morten Hylander Møller, Benjamin Skov Kaas-Hansen

Different combined outcome-data lags (follow-up durations plus data-collection lags) may affect the performance of adaptive clinical trial designs. We assessed the influence of different outcome-data lags (0-105 days) on the performance of various multi-stage, adaptive trial designs (2/4 arms, with/without a common control, fixed/response-adaptive randomisation) with undesirable binary outcomes, according to different inclusion rates (3.33/6.67/10 patients/day), under scenarios with no, small, and large differences. Simulations were conducted under a Bayesian framework, with constant stopping thresholds for superiority/inferiority calibrated to keep type-1 error rates at approximately 5%. We assessed multiple performance metrics, including mean sample sizes, event counts/probabilities, probabilities of conclusiveness, root mean squared errors (RMSEs) of the estimated effect in the selected arms, and RMSEs between the analyses at the time of stopping and the final analyses including data from all randomised patients. Performance metrics generally deteriorated when the proportions of randomised patients with available data were smaller due to longer outcome-data lags or faster inclusion; that is, mean sample sizes, event counts/probabilities, and RMSEs were larger, while the probabilities of conclusiveness were lower. Performance metric impairments with outcome-data lags of ≤45 days were relatively small compared with those occurring with lags of ≥60 days. For most metrics, the effects of different outcome-data lags and lower proportions of randomised patients with available data were larger than those of different design choices, for example, the use of fixed versus response-adaptive randomisation. Increased outcome-data lag substantially affected the performance of adaptive trial designs. Trialists should consider the effects of outcome-data lags when planning adaptive trials.

Citations: 0
An illness-death multistate model to implement delta adjustment and reference-based imputation with time-to-event endpoints.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-11-08 | DOI: 10.1002/pst.2348
Alberto García-Hernandez, Teresa Pérez, María Del Carmen Pardo, Dimitris Rizopoulos

With a treatment policy strategy, therapies are evaluated regardless of the disturbance caused by intercurrent events (ICEs). Implementing this estimand is challenging if subjects are not followed up after the ICE. This circumstance can be dealt with using delta adjustment (DA) or reference-based (RB) imputation. In the survival field, DA and RB imputation have so far been researched using multiple imputation (MI). Here, we present a fully analytical solution. We use the illness-death multistate model with the following transitions: (a) from the initial state to the event of interest, (b) from the initial state to the ICE, and (c) from the ICE to the event. We estimate the intensity functions of transitions (a) and (b) using flexible parametric survival models. Transition (c) is assumed unobserved but identifiable using DA or RB imputation assumptions. Various rules have been considered: no ICE effect, DA under proportional hazards (PH) or additive hazards (AH), jump to reference (J2R), and (either PH or AH) copy increment from reference. We obtain the marginal survival curve of interest by calculating, via numerical integration, the probability of transitioning from the initial state to the event of interest, regardless of whether the ICE state has been passed through. We use the delta method to obtain standard errors (SEs). Finally, we quantify the performance of the proposed estimator through simulations and compare it against MI. Our analytical solution is more efficient than MI and avoids SE misestimation, a known phenomenon associated with Rubin's variance equation.
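A minimal sketch of the core calculation, assuming constant transition intensities for simplicity (the paper uses flexible parametric models): the event-free probability is the chance of still being in the initial state plus the chance of having passed through the ICE state without the event, obtained by numerical integration. The intensity values and the PH-type delta adjustment shown are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import quad

def event_free_prob(t, h01, h02, h12):
    """P(event of interest not yet occurred by t) in a Markov illness-death model with
    constant intensities h01 (initial->ICE), h02 (initial->event), h12 (ICE->event)."""
    s0 = lambda u: np.exp(-(h01 + h02) * u)                   # still in the initial state at u
    via_ice, _ = quad(lambda u: s0(u) * h01 * np.exp(-h12 * (t - u)), 0.0, t)
    return s0(t) + via_ice

h01, h02 = 0.10, 0.25      # hypothetical yearly intensities
h12_ref = 0.25             # reference post-ICE intensity
for delta in (0.0, 0.5):   # delta adjustment as an assumed PH-type log-hazard shift after the ICE
    surv = event_free_prob(2.0, h01, h02, h12_ref * np.exp(delta))
    print(f"delta = {delta:.1f}: P(event-free at t = 2) = {surv:.3f}")
```

Setting delta to zero corresponds to the no-ICE-effect rule, and replacing the post-ICE intensity with the reference arm's intensity mimics a jump-to-reference type rule; the paper derives these quantities, and their standard errors, analytically rather than by this toy integration.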

Citations: 0
Conditional power and information fraction calculations at an interim analysis for random coefficient models.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-11-02 | DOI: 10.1002/pst.2345
Sandra A Lewis, Kevin J Carroll, Todd DeVries, Jonathan Barratt

Random coefficient (RC) models are commonly used in clinical trials to estimate the rate of change over time in longitudinal data. Utilizing a surrogate endpoint for accelerated approval together with a confirmatory longitudinal endpoint to show clinical benefit is a strategy implemented across various therapeutic areas, including immunoglobulin A nephropathy. Understanding conditional power (CP) and information fraction calculations for RC models may help in the design of clinical trials as well as provide support for the confirmatory endpoint at the time of accelerated approval. This paper provides calculation methods, with practical examples, for determining CP at an interim analysis for an RC model with longitudinal data, such as estimated glomerular filtration rate (eGFR) assessments used to measure the rate of change in eGFR slope.
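For readers unfamiliar with the random coefficient model underlying such CP calculations, here is a small sketch that simulates hypothetical eGFR trajectories and fits a random intercept-and-slope model, where the treatment-by-time fixed effect is the treatment effect on eGFR slope. The data-generating values are invented, and the paper's CP and information-fraction formulas are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
visits = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # years from baseline
rows = []
for i in range(60):                                    # hypothetical subjects
    trt = i % 2
    intercept = 70.0 + rng.normal(0.0, 8.0)            # baseline eGFR
    slope = -3.0 + 1.0 * trt + rng.normal(0.0, 0.8)    # subject-specific eGFR slope
    for t in visits:
        rows.append({"id": i, "trt": trt, "time": t,
                     "egfr": intercept + slope * t + rng.normal(0.0, 2.0)})
df = pd.DataFrame(rows)

# Random coefficient model: random intercept and slope per subject;
# the trt:time fixed effect is the treatment effect on the eGFR slope.
model = smf.mixedlm("egfr ~ time + trt:time", df, groups=df["id"], re_formula="~time")
fit = model.fit()
print(fit.summary())
```

At an interim, only part of each subject's follow-up is observed, so the information accrued for the slope contrast (and hence the information fraction and CP) depends on the visit schedule and variance components rather than simply on the number of subjects enrolled.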

Citations: 0
Propensity score-incorporated adaptive design approaches when incorporating real-world data.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-11-28 | DOI: 10.1002/pst.2347
Nelson Lu, Wei-Chen Chen, Heng Li, Changhong Song, Ram Tiwari, Chenguang Wang, Yunling Xu, Lilly Q Yue

The propensity score-integrated composite likelihood (PSCL) method is one method that can be utilized to design and analyze an application in which real-world data (RWD) are leveraged to augment a prospectively designed clinical study. In PSCL, strata are formed based on propensity scores (PS) such that subjects who are similar in terms of baseline covariates, from both the current study and the RWD sources, are placed in the same stratum, and the composite likelihood method is then applied to down-weight the information from the RWD. While PSCL was originally proposed for a fixed design, it can be extended to an adaptive design framework, with the purpose of either potentially claiming early success or re-estimating the sample size. In this paper, a general strategy is proposed that exploits these features of PSCL. For the possibility of claiming early success, Fisher's combination test is utilized. When the purpose is to re-estimate the sample size, the proposed procedure is based on the test proposed by Cui, Hung, and Wang. The implementation of these two procedures is demonstrated via an example.
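Fisher's combination test itself is standard: under the null hypothesis, minus twice the sum of the log p-values follows a chi-squared distribution with 2k degrees of freedom for k independent p-values. The sketch below combines a hypothetical first-stage and second-stage p-value this way; it does not reproduce the PSCL stratum weighting or the Cui-Hung-Wang sample-size re-estimation test.

```python
import numpy as np
from scipy import stats

def fisher_combination(p_values):
    """Fisher's combination test: chi-squared statistic and combined p-value."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.sum(np.log(p))
    return statistic, stats.chi2.sf(statistic, df=2 * p.size)

# Hypothetical stage-wise p-values from the two stages of the adaptive design
stat, p_comb = fisher_combination([0.04, 0.10])
print(f"chi-squared = {stat:.2f}, combined p = {p_comb:.4f}")
```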

Citations: 0
Enrollment forecast for clinical trials at the portfolio planning phase based on site-level historical data.
IF 1.5 | CAS Medicine Tier 4 | Q4 PHARMACOLOGY & PHARMACY | Pub Date: 2024-03-01 | Epub Date: 2023-10-23 | DOI: 10.1002/pst.2343
Sheng Zhong, Yunzhao Xing, Mengjia Yu, Li Wang

An accurate forecast of a clinical trial enrollment timeline at the planning phase is of great importance to both corporate strategic planning and trial operational excellence. The naive approach often calculates an average enrollment rate from historical data and generates an inaccurate prediction based on a linear trend at that average rate. Under the traditional framework of a Poisson-Gamma model, site activation delays are often modeled with either a fixed initiation time or a simple random distribution, while incorporating user-provided site planning information to achieve good forecast accuracy. However, such user-provided information is not available at the early portfolio planning stage. We present a novel statistical approach based on generalized linear mixed-effects models and the use of non-homogeneous Poisson processes through the Bayesian framework to model country initiation, site activation, and subject enrollment sequentially in a systematic fashion. We validate the performance of our proposed enrollment modeling framework on a set of 25 preselected studies from four therapeutic areas. Our modeling framework shows a substantial improvement in prediction accuracy in comparison to the traditional statistical approach. Furthermore, we show that our modeling and simulation approach calibrates the data variability appropriately and gives correct coverage rates for prediction intervals of various nominal levels. Finally, we demonstrate the use of our approach to generate predicted enrollment curves over time with confidence bands overlaid.
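As a baseline for comparison, here is a small sketch of the traditional Poisson-Gamma forecast the paper improves upon: site-level enrollment rates are drawn from a gamma distribution and each site enrolls at a constant rate after a random activation delay. All rates, delays, and counts are hypothetical, and the paper's mixed-effects / non-homogeneous Poisson process model is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_enrollment(n_sites, horizon_days, rate_shape, rate_scale, max_delay, n_sims=1000):
    """Simulate total enrollment by `horizon_days` under a simple Poisson-Gamma model
    with uniform site-activation delays."""
    totals = np.empty(n_sims)
    for s in range(n_sims):
        rates = rng.gamma(rate_shape, rate_scale, size=n_sites)   # patients/day per site
        delays = rng.uniform(0.0, max_delay, size=n_sites)        # activation delay per site
        active_time = np.clip(horizon_days - delays, 0.0, None)   # enrolling time per site
        totals[s] = rng.poisson(rates * active_time).sum()
    return totals

totals = simulate_enrollment(n_sites=40, horizon_days=180,
                             rate_shape=2.0, rate_scale=0.05, max_delay=90.0)
lo, med, hi = np.percentile(totals, [5, 50, 95])
print(f"Enrollment at day 180: median {med:.0f}, 90% interval ({lo:.0f}, {hi:.0f})")
```

Repeating the simulation over a grid of horizons yields a forecast curve with uncertainty bands, which is the kind of output the paper produces from its richer country/site/subject hierarchy.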

Citations: 0