The topic of this article is pre-posterior distributions of success or failure. These distributions, determined before a study is run and based on all our assumptions, are what we should believe about the treatment effect if we are told only that the study has been successful, or unsuccessful. I show how the pre-posterior distributions of success and failure can be used during the planning phase of a study to investigate whether the study is able to discriminate between effective and ineffective treatments. I show how these distributions are linked to the probability of success (PoS), or failure, and how they can be determined from simulations if standard asymptotic normality assumptions are inappropriate. I show the link to the concept of the conditional PoS introduced by Temple and Robertson in the context of the planning of multiple studies. Finally, I show that they can also be constructed regardless of whether the analysis of the study is frequentist or fully Bayesian.
{"title":"Pre-Posterior Distributions in Drug Development and Their Properties.","authors":"Andrew P Grieve","doi":"10.1002/pst.2450","DOIUrl":"https://doi.org/10.1002/pst.2450","url":null,"abstract":"<p><p>The topic of this article is pre-posterior distributions of success or failure. These distributions, determined before a study is run and based on all our assumptions, are what we should believe about the treatment effect if we are told only that the study has been successful, or unsuccessful. I show how the pre-posterior distributions of success and failure can be used during the planning phase of a study to investigate whether the study is able to discriminate between effective and ineffective treatments. I show how these distributions are linked to the probability of success (PoS), or failure, and how they can be determined from simulations if standard asymptotic normality assumptions are inappropriate. I show the link to the concept of the conditional <math> <semantics><mrow><mi>P</mi> <mi>o</mi> <mi>S</mi></mrow> <annotation>$$ PoS $$</annotation></semantics> </math> introduced by Temple and Robertson in the context of the planning of multiple studies. Finally, I show that they can also be constructed regardless of whether the analysis of the study is frequentist or fully Bayesian.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142716661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dan Jackson, Fanni Zhang, Carl-Fredrik Burman, Linda Sharples
The number of clinical trials that include a binary biomarker in design and analysis has risen due to the advent of personalised medicine. This presents challenges for medical decision makers because a drug may confer a stronger effect in the biomarker positive group, and so be approved either in this subgroup alone or in the all-comer population. We develop and evaluate Bayesian methods that can be used to assess this. All our methods are based on the same statistical model for the observed data but we propose different prior specifications to express differing degrees of knowledge about the extent to which the treatment may be more effective in one subgroup than the other. We illustrate our methods using some real examples. We also show how our methodology is useful when designing trials where the size of the biomarker negative subgroup is to be determined. We conclude that our Bayesian framework is a natural tool for making decisions, for example, whether to recommend using the treatment in the biomarker negative subgroup where the treatment is less likely to be efficacious, or determining the number of biomarker positive and negative patients to include when designing a trial.
{"title":"Bayesian Solutions for Assessing Differential Effects in Biomarker Positive and Negative Subgroups.","authors":"Dan Jackson, Fanni Zhang, Carl-Fredrik Burman, Linda Sharples","doi":"10.1002/pst.2456","DOIUrl":"https://doi.org/10.1002/pst.2456","url":null,"abstract":"<p><p>The number of clinical trials that include a binary biomarker in design and analysis has risen due to the advent of personalised medicine. This presents challenges for medical decision makers because a drug may confer a stronger effect in the biomarker positive group, and so be approved either in this subgroup alone or in the all-comer population. We develop and evaluate Bayesian methods that can be used to assess this. All our methods are based on the same statistical model for the observed data but we propose different prior specifications to express differing degrees of knowledge about the extent to which the treatment may be more effective in one subgroup than the other. We illustrate our methods using some real examples. We also show how our methodology is useful when designing trials where the size of the biomarker negative subgroup is to be determined. We conclude that our Bayesian framework is a natural tool for making decisions, for example, whether to recommend using the treatment in the biomarker negative subgroup where the treatment is less likely to be efficacious, or determining the number of biomarker positive and negative patients to include when designing a trial.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142716656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The results of randomized clinical trials (RCTs) are frequently assessed with the fragility index (FI). Although the information provided by FI may supplement the p value, this indicator presents intrinsic weaknesses and shortcomings. In this article, we establish an analysis of fragility within a broader framework so that it can reliably complement the information provided by the p value. This perspective is named the analysis of strength. We first propose a new strength index (SI), which can be adopted in normal distribution settings. This measure can be obtained for both significant and nonsignificant results and is straightforward to calculate, offering compelling advantages over FI, beginning with the availability of a threshold. The case of time-to-event outcomes is also addressed. Then, beyond the p value, we develop the analysis of strength using likelihood ratios from Royall's statistical evidence viewpoint. A new R package is provided for performing strength calculations, and a simulation study is conducted to explore the behavior of SI and the likelihood-based indicator empirically across different settings. The newly proposed analysis of strength is applied in the assessment of the results of three recent trials involving the treatment of COVID-19.
{"title":"Beyond the Fragility Index.","authors":"Piero Quatto, Enrico Ripamonti, Donata Marasini","doi":"10.1002/pst.2452","DOIUrl":"https://doi.org/10.1002/pst.2452","url":null,"abstract":"<p><p>The results of randomized clinical trials (RCTs) are frequently assessed with the fragility index (FI). Although the information provided by FI may supplement the p value, this indicator presents intrinsic weaknesses and shortcomings. In this article, we establish an analysis of fragility within a broader framework so that it can reliably complement the information provided by the p value. This perspective is named the analysis of strength. We first propose a new strength index (SI), which can be adopted in normal distribution settings. This measure can be obtained for both significance and nonsignificance and is straightforward to calculate, thus presenting compelling advantages over FI, starting from the presence of a threshold. The case of time-to-event outcomes is also addressed. Then, beyond the p value, we develop the analysis of strength using likelihood ratios from Royall's statistical evidence viewpoint. A new R package is provided for performing strength calculations, and a simulation study is conducted to explore the behavior of SI and the likelihood-based indicator empirically across different settings. The newly proposed analysis of strength is applied in the assessment of the results of three recent trials involving the treatment of COVID-19.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142687990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precision medicine is the future of drug development, and subgroup identification plays a critical role in achieving this goal. In this paper, we propose a powerful end-to-end solution, squant (available on CRAN), that explores a sequence of quantitative objectives. The method converts the original study to an artificial 1:1 randomized trial, and features a flexible objective function, a stable signature with good interpretability, and an embedded false discovery rate (FDR) control. We demonstrate its performance through simulation and provide a real data example.
{"title":"Subgroup Identification Based on Quantitative Objectives.","authors":"Yan Sun, A S Hedayat","doi":"10.1002/pst.2455","DOIUrl":"https://doi.org/10.1002/pst.2455","url":null,"abstract":"<p><p>Precision medicine is the future of drug development, and subgroup identification plays a critical role in achieving the goal. In this paper, we propose a powerful end-to-end solution squant (available on CRAN) that explores a sequence of quantitative objectives. The method converts the original study to an artificial 1:1 randomized trial, and features a flexible objective function, a stable signature with good interpretability, and an embedded false discovery rate (FDR) control. We demonstrate its performance through simulation and provide a real data example.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142648133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When multiple historical controls are available, it is necessary to consider the conflicts between current and historical controls and the relationships among historical controls. One assumption concerning the relationships between the parameters of interest of the current and historical controls is known as "potential biases." Under this assumption, the differences between the parameter of interest of the current control and that of each historical control are defined as "potential bias parameters." We define a class of models, called the "potential bias model," that encompasses several existing methods, including the commensurate prior. The potential bias model incorporates homogeneous historical controls by shrinking the potential bias parameters to zero. For scenarios where multiple historical controls are available, a method that uses a horseshoe prior has been proposed; however, various other shrinkage priors are also available. In this study, we propose methods that apply spike-and-slab, Dirichlet-Laplace, and spike-and-slab lasso priors to the potential bias model. We conduct a simulation study and analyze clinical trial examples to compare the performance of the proposed and existing methods. The horseshoe prior and the three other priors make the strongest use of historical controls in the absence of heterogeneous historical controls and reduce the influence of heterogeneous historical controls when a few are present. Among the four priors, the spike-and-slab prior performed best for heterogeneous historical controls.
{"title":"Potential Bias Models With Bayesian Shrinkage Priors for Dynamic Borrowing of Multiple Historical Control Data.","authors":"Tomohiro Ohigashi, Kazushi Maruo, Takashi Sozu, Ryo Sawamoto, Masahiko Gosho","doi":"10.1002/pst.2453","DOIUrl":"https://doi.org/10.1002/pst.2453","url":null,"abstract":"<p><p>When multiple historical controls are available, it is necessary to consider the conflicts between current and historical controls and the relationships among historical controls. One of the assumptions concerning the relationships between the parameters of interest of current and historical controls is known as the \"Potential biases.\" Within the \"Potential biases\" assumption, the differences between the parameters of interest of the current control and of each historical control are defined as \"potential bias parameters.\" We define a class of models called \"potential biases model\" that encompass several existing methods, including the commensurate prior. The potential bias model incorporates homogeneous historical controls by shrinking the potential bias parameters to zero. In scenarios where multiple historical controls are available, a method that uses a horseshoe prior was proposed. However, various other shrinkage priors are also available. In this study, we propose methods that apply spike-and-slab, Dirichlet-Laplace, and spike-and-slab lasso priors to the potential bias model. We conduct a simulation study and analyze clinical trial examples to compare the performances of the proposed and existing methods. The horseshoe prior and the three other priors make the strongest use of historical controls in the absence of heterogeneous historical controls and reduce the influence of heterogeneous historical controls in the presence of a few historical controls. Among these four priors, the spike-and-slab prior performed the best for heterogeneous historical controls.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142648110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jun Zhang, Kentaro Takeda, Masato Takeuchi, Kanji Komatsu, Jing Zhu, Yusuke Yamaguchi
The primary purpose of an oncology dose-finding trial for novel anticancer agents has been shifting from determining the maximum tolerated dose to identifying an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. In 2022, the FDA Oncology Center of Excellence initiated Project Optimus to reform the paradigm of dose optimization and dose selection in oncology drug development and issued a draft guidance. The guidance suggests that dose-finding trials include randomized dose-response cohorts of multiple doses and incorporate information on pharmacokinetics (PK) in addition to safety and efficacy data to select the OD. Furthermore, PK information could be a quick alternative to efficacy data to predict the minimum efficacious dose and decide the dose assignment. This article proposes a model-based trial design for dose optimization with a randomization scheme based on PK outcomes in oncology. A simulation study shows that the proposed design has advantages compared to the other designs in the percentage of correct OD selection and the average number of patients assigned to OD in various realistic settings.
{"title":"A Model-Based Trial Design With a Randomization Scheme Considering Pharmacokinetics Exposure for Dose Optimization in Oncology.","authors":"Jun Zhang, Kentaro Takeda, Masato Takeuchi, Kanji Komatsu, Jing Zhu, Yusuke Yamaguchi","doi":"10.1002/pst.2454","DOIUrl":"https://doi.org/10.1002/pst.2454","url":null,"abstract":"<p><p>The primary purpose of an oncology dose-finding trial for novel anticancer agents has been shifting from determining the maximum tolerated dose to identifying an optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. In 2022, the FDA Oncology Center of Excellence initiated Project Optimus to reform the paradigm of dose optimization and dose selection in oncology drug development and issued a draft guidance. The guidance suggests that dose-finding trials include randomized dose-response cohorts of multiple doses and incorporate information on pharmacokinetics (PK) in addition to safety and efficacy data to select the OD. Furthermore, PK information could be a quick alternative to efficacy data to predict the minimum efficacious dose and decide the dose assignment. This article proposes a model-based trial design for dose optimization with a randomization scheme based on PK outcomes in oncology. A simulation study shows that the proposed design has advantages compared to the other designs in the percentage of correct OD selection and the average number of patients assigned to OD in various realistic settings.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142647796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of targeted therapy, immunotherapy, and antibody-drug conjugates (ADCs), there is growing concern over the "more is better" paradigm developed decades ago for chemotherapy, prompting the US Food and Drug Administration (FDA) to initiate Project Optimus to reform dose optimization and selection in oncology drug development. For early-phase oncology trials, given the high variability from sparse data and the rigidity of parametric model specifications, we use Bayesian dynamic models to borrow information across doses with only vague order constraints. Our proposed adaptive design simultaneously incorporates toxicity and efficacy outcomes to select the optimal dose (OD) in Phase I/II clinical trials, utilizing Bayesian model averaging to address the uncertainty of dose-response relationships and enhance the robustness of the design. Additionally, we extend the proposed design to handle delayed toxicity and efficacy outcomes. We conduct extensive simulation studies to evaluate the operating characteristics of the proposed method under various practical scenarios. The results demonstrate that the proposed designs have desirable operating characteristics. A trial example is presented to demonstrate the practical implementation of the proposed designs.
{"title":"A Bayesian Dynamic Model-Based Adaptive Design for Oncology Dose Optimization in Phase I/II Clinical Trials.","authors":"Yingjie Qiu, Mingyue Li","doi":"10.1002/pst.2451","DOIUrl":"https://doi.org/10.1002/pst.2451","url":null,"abstract":"<p><p>With the development of targeted therapy, immunotherapy, and antibody-drug conjugates (ADCs), there is growing concern over the \"more is better\" paradigm developed decades ago for chemotherapy, prompting the US Food and Drug Administration (FDA) to initiate Project Optimus to reform dose optimization and selection in oncology drug development. For early-phase oncology trials, given the high variability from sparse data and the rigidity of parametric model specifications, we use Bayesian dynamic models to borrow information across doses with only vague order constraints. Our proposed adaptive design simultaneously incorporates toxicity and efficacy outcomes to select the optimal dose (OD) in Phase I/II clinical trials, utilizing Bayesian model averaging to address the uncertainty of dose-response relationships and enhance the robustness of the design. Additionally, we extend the proposed design to handle delayed toxicity and efficacy outcomes. We conduct extensive simulation studies to evaluate the operating characteristics of the proposed method under various practical scenarios. The results demonstrate that the proposed designs have desirable operating characteristics. A trial example is presented to demonstrate the practical implementation of the proposed designs.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142625891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01. Epub Date: 2024-08-04. DOI: 10.1002/pst.2411
Thomas Drury, Juan J Abellan, Nicky Best, Ian R White
The estimands framework outlined in ICH E9 (R1) describes the components needed to precisely define the effects to be estimated in clinical trials, including how post-baseline 'intercurrent' events (IEs) are to be handled. In late-stage clinical trials, it is common to handle IEs such as 'treatment discontinuation' using the treatment policy strategy and to target the treatment effect on outcomes regardless of treatment discontinuation. For continuous repeated measures, this type of effect is often estimated using all observed data before and after discontinuation, with either a mixed model for repeated measures (MMRM) or multiple imputation (MI) used to handle any missing data. In their basic forms, both estimation methods ignore treatment discontinuation in the analysis and may therefore be biased if patient outcomes after treatment discontinuation differ from those of patients still assigned to treatment, and if missing data are more common for patients who have discontinued treatment. We therefore propose and evaluate a set of MI models that can accommodate differences between outcomes before and after treatment discontinuation. The models are evaluated in the context of planning a Phase 3 trial for a respiratory disease. We show that analyses ignoring treatment discontinuation can introduce substantial bias and can sometimes underestimate variability. We also show that some of the proposed MI models can successfully correct the bias, but inevitably lead to increases in variance. We conclude that some of the proposed MI models are preferable to the traditional analysis ignoring treatment discontinuation, but the precise choice of MI model will likely depend on the trial design, the disease of interest, and the amount of observed and missing data following treatment discontinuation.
{"title":"Estimation of Treatment Policy Estimands for Continuous Outcomes Using Off-Treatment Sequential Multiple Imputation.","authors":"Thomas Drury, Juan J Abellan, Nicky Best, Ian R White","doi":"10.1002/pst.2411","DOIUrl":"10.1002/pst.2411","url":null,"abstract":"<p><p>The estimands framework outlined in ICH E9 (R1) describes the components needed to precisely define the effects to be estimated in clinical trials, which includes how post-baseline 'intercurrent' events (IEs) are to be handled. In late-stage clinical trials, it is common to handle IEs like 'treatment discontinuation' using the treatment policy strategy and target the treatment effect on outcomes regardless of treatment discontinuation. For continuous repeated measures, this type of effect is often estimated using all observed data before and after discontinuation using either a mixed model for repeated measures (MMRM) or multiple imputation (MI) to handle any missing data. In basic form, both these estimation methods ignore treatment discontinuation in the analysis and therefore may be biased if there are differences in patient outcomes after treatment discontinuation compared with patients still assigned to treatment, and missing data being more common for patients who have discontinued treatment. We therefore propose and evaluate a set of MI models that can accommodate differences between outcomes before and after treatment discontinuation. The models are evaluated in the context of planning a Phase 3 trial for a respiratory disease. We show that analyses ignoring treatment discontinuation can introduce substantial bias and can sometimes underestimate variability. We also show that some of the MI models proposed can successfully correct the bias, but inevitably lead to increases in variance. We conclude that some of the proposed MI models are preferable to the traditional analysis ignoring treatment discontinuation, but the precise choice of MI model will likely depend on the trial design, disease of interest and amount of observed and missing data following treatment discontinuation.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1144-1155"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602932/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141889907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01. Epub Date: 2024-05-19. DOI: 10.1002/pst.2397
Jialuo Liu, Dong Xi
Difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. The estimation of the difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization, or g-computation, is a widely used method for covariate adjustment in estimating the unconditional difference in proportions because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and construct confidence intervals based on large-sample theory. However, their performance under small sample sizes and model misspecification has not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance the finite-sample performance. Extensive simulations are provided to demonstrate the performance of the proposed method, spanning a wide range of sample sizes, randomization ratios, and model specifications. We apply the proposed method to a real data example to illustrate its practical utility.
{"title":"Covariate adjustment and estimation of difference in proportions in randomized clinical trials.","authors":"Jialuo Liu, Dong Xi","doi":"10.1002/pst.2397","DOIUrl":"10.1002/pst.2397","url":null,"abstract":"<p><p>Difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. The estimation of difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization or g-computation is a widely used method for covariate adjustment in estimating unconditional difference in proportions, because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and confidence intervals based on large-sample theories. However, their performances under small sample sizes and model misspecification have not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance the finite sample performance. Extensive simulations are provided to demonstrate the performances of the proposed method, spanning a wide range of sample sizes, randomization ratios, and model specification. We apply the proposed method in a real data example to illustrate the practical utility.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"884-905"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141065823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01. Epub Date: 2024-06-25. DOI: 10.1002/pst.2409
G M Hair, T Jemielita, S Mt-Isa, P M Schnell, R Baumgartner
Subgroup analysis may be used to investigate treatment effect heterogeneity among subsets of the study population defined by baseline characteristics. Several methodologies have been proposed in recent years, and with them statistical issues such as multiplicity, complexity, and selection bias have been widely discussed. Some methods adjust for one or more of these issues; however, few of them discuss or consider the stability of the subgroup assignments. We propose exploring the stability of subgroups as a sensitivity analysis step for stratified medicine, to assess the robustness of the identified subgroups and to identify possible factors that may drive any instability. After applying Bayesian credible subgroups, a nonparametric bootstrap can be used to assess stability at the subgroup level and the patient level. Our findings illustrate that when the treatment effect is small or not so evident, patients are more likely to switch to different subgroups (jumpers) across bootstrap resamples. In contrast, when the treatment effect is large or extremely convincing, patients generally remain in the same subgroup. While the proposed subgroup stability method is illustrated through the Bayesian credible subgroups method on time-to-event data, this general approach can be used with other subgroup identification methods and endpoints.
{"title":"Investigating Stability in Subgroup Identification for Stratified Medicine.","authors":"G M Hair, T Jemielita, S Mt-Isa, P M Schnell, R Baumgartner","doi":"10.1002/pst.2409","DOIUrl":"10.1002/pst.2409","url":null,"abstract":"<p><p>Subgroup analysis may be used to investigate treatment effect heterogeneity among subsets of the study population defined by baseline characteristics. Several methodologies have been proposed in recent years and with these, statistical issues such as multiplicity, complexity, and selection bias have been widely discussed. Some methods adjust for one or more of these issues; however, few of them discuss or consider the stability of the subgroup assignments. We propose exploring the stability of subgroups as a sensitivity analysis step for stratified medicine to assess the robustness of the identified subgroups besides identifying possible factors that may drive this instability. After applying Bayesian credible subgroups, a nonparametric bootstrap can be used to assess stability at subgroup-level and patient-level. Our findings illustrate that when the treatment effect is small or not so evident, patients are more likely to switch to different subgroups (jumpers) across bootstrap resamples. In contrast, when the treatment effect is large or extremely convincing, patients generally remain in the same subgroup. While the proposed subgroup stability method is illustrated through Bayesian credible subgroups method on time-to-event data, this general approach can be used with other subgroup identification methods and endpoints.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"945-958"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141458676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}