Yulia Dyachkova, Cornelia Dunger-Baldauf, Nathalie Barbier, Jenny Devenport, Stefan Franzén, Gbenga Kazeem, Thomas Künzel, Pierre Mancini, Giacomo Mordenti, Knut Richert, Antonia Ridolfi, Daniel Saure
Single-arm trials (SATs), while not preferred, remain in use throughout the drug development cycle. They may be accepted by regulators in particular contexts (e.g., in oncology or rare diseases) when the potential effects of new treatments are very large and placebo treatment is unethical. In the postregulatory space, however, SATs are common and perhaps even less well suited to the questions of interest. In this manuscript, we review regulatory and health technology assessment (HTA) positions on SATs; the challenges SATs pose for research questions beyond regulatory approval; evolving statistical methods to provide context for SATs; case studies in which SATs could and could not address the questions of interest; and communication strategies to influence decision making and optimize study design to address evidence needs.
{"title":"Do You Want to Stay Single? Considerations on Single-Arm Trials in Drug Development and the Postregulatory Space.","authors":"Yulia Dyachkova, Cornelia Dunger-Baldauf, Nathalie Barbier, Jenny Devenport, Stefan Franzén, Gbenga Kazeem, Thomas Künzel, Pierre Mancini, Giacomo Mordenti, Knut Richert, Antonia Ridolfi, Daniel Saure","doi":"10.1002/pst.2412","DOIUrl":"https://doi.org/10.1002/pst.2412","url":null,"abstract":"<p><p>Single-arm trials (SATs), while not preferred, remain in use throughout the drug development cycle. They may be accepted by regulators in particular contexts (e.g., in oncology or rare diseases) when the potential effects of new treatments are very large and placebo treatment is unethical. However, in the postregulatory space, SATs are common, and perhaps even more poorly suited to address the questions of interest. In this manuscript, we review regulatory and HTA positions on SATs; challenges posed by SATs to address research questions beyond regulators, evolving statistical methods to provide context for SATs, case studies where SATs could and could not address questions of interest, and communication strategies to influence decision making and optimize study design to address evidence needs.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141458675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delayed outcomes are common in phase I oncology clinical trials. They cause logistical difficulties, waste resources, and prolong the trial duration. This article investigates this issue and proposes the time-to-event 3 + 3 (T3 + 3) design, which utilizes the actual follow-up time of at-risk patients with pending toxicity outcomes. The T3 + 3 design allows continuous accrual without unnecessary trial suspension and is costless and implementable with pretabulated dose decision rules. In addition, the T3 + 3 design uses isotonic regression to estimate the toxicity rates across dose levels and can therefore accommodate any targeted toxicity rate for the maximum tolerated dose (MTD). This greatly facilitates trial preparation and conduct without intensive computation or statistical consultation. The extension to other algorithm-based phase I dose-finding designs (e.g., the i3 + 3 design) is also studied. Comprehensive computer simulation studies are conducted to investigate the performance of the T3 + 3 design under various dose-toxicity scenarios. The results confirm that the T3 + 3 design substantially shortens the trial duration compared with the conventional 3 + 3 design and yields much higher accuracy in MTD identification than the rolling six design. In summary, the T3 + 3 design addresses the delayed outcome issue while keeping the desirable features of the 3 + 3 design, such as simplicity, transparency, and costless implementation. It has great potential to accelerate early-phase drug development.
{"title":"T3 + 3: 3 + 3 Design With Delayed Outcomes.","authors":"Jiaying Guo, Mengyi Lu, Isabella Wan, Yumin Wang, Leng Han, Yong Zang","doi":"10.1002/pst.2414","DOIUrl":"10.1002/pst.2414","url":null,"abstract":"<p><p>Delayed outcome is common in phase I oncology clinical trials. It causes logistic difficulty, wastes resources, and prolongs the trial duration. This article investigates this issue and proposes the time-to-event 3 + 3 (T3 + 3) design, which utilizes the actual follow-up time for at-risk patients with pending toxicity outcomes. The T3 + 3 design allows continuous accrual without unnecessary trial suspension and is costless and implementable with pretabulated dose decision rules. Besides, the T3 + 3 design uses the isotonic regression to estimate the toxicity rates across dose levels and therefore can accommodate for any targeted toxicity rate for maximum tolerated dose (MTD). It dramatically facilitates the trial preparation and conduct without intensive computation and statistical consultation. The extension to other algorithm-based phase I dose-finding designs (e.g., i3 + 3 design) is also studied. Comprehensive computer simulation studies are conducted to investigate the performance of the T3 + 3 design under various dose-toxicity scenarios. The results confirm that the T3 + 3 design substantially shortens the trial duration compared with the conventional 3 + 3 design and yields much higher accuracy in MTD identification than the rolling six design. In summary, the T3 + 3 design addresses the delayed outcome issue while keeping the desirable features of the 3 + 3 design, such as simplicity, transparency, and costless implementation. It has great potential to accelerate early-phase drug development.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141458677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recombinant adeno-associated virus (AAV) has become a popular platform for many gene therapy applications. The strength of AAV-based products is a critical quality attribute that affects the efficacy of the drug and is measured as the concentration of vector genomes, or physical titer. Because the dosing of patients is based on the titer measurement, it is critical for manufacturers to ensure that the measured titer of the drug product is close to the actual concentration of the batch. Historically, dosing calculations have been performed using the measured titer, which is reported on the drug product label. However, due to recent regulatory guidance, sponsors are now expected to label the drug product with nominal or "target" titer. This new expectation for gene therapy products can pose a challenge in the presence of process and analytical variability. In particular, the manufacturer must decide if a dilution of the drug substance is warranted at the drug product stage to bring the strength in line with the nominal value. In this paper, we present two straightforward statistical methods to aid the manufacturer in the dilution decision. These approaches use the understanding of process and analytical variability to compute probabilities of achieving the desired drug product titer. We also provide an approach for determining an optimal assay replication strategy for achieving the desired probability of meeting drug product release specifications.
{"title":"To Dilute or Not to Dilute: Nominal Titer Dosing for Genetic Medicines.","authors":"Paul Faya, Tianhui Zhang","doi":"10.1002/pst.2406","DOIUrl":"https://doi.org/10.1002/pst.2406","url":null,"abstract":"<p><p>Recombinant adeno-associated virus (AAV) has become a popular platform for many gene therapy applications. The strength of AAV-based products is a critical quality attribute that affects the efficacy of the drug and is measured as the concentration of vector genomes, or physical titer. Because the dosing of patients is based on the titer measurement, it is critical for manufacturers to ensure that the measured titer of the drug product is close to the actual concentration of the batch. Historically, dosing calculations have been performed using the measured titer, which is reported on the drug product label. However, due to recent regulatory guidance, sponsors are now expected to label the drug product with nominal or \"target\" titer. This new expectation for gene therapy products can pose a challenge in the presence of process and analytical variability. In particular, the manufacturer must decide if a dilution of the drug substance is warranted at the drug product stage to bring the strength in line with the nominal value. In this paper, we present two straightforward statistical methods to aid the manufacturer in the dilution decision. These approaches use the understanding of process and analytical variability to compute probabilities of achieving the desired drug product titer. We also provide an approach for determining an optimal assay replication strategy for achieving the desired probability of meeting drug product release specifications.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141458678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
David Potter, Thomas Bradstreet, Davit Sargsyan, Xiao Tan, Vinicius Bonato, Dingzhou Li, John Liang, Ondrej Libiger, Jocelyn Sendecki, John Stansfield, Kanaka Tatikola, Jialin Xu, Brandy Campbell
In this tutorial, we explore the valuable partnership between statisticians and Institutional Animal Care and Use Committees (IACUCs) in the context of animal research, shedding light on the critical role statisticians play in ensuring the ethical and scientifically rigorous use of animals in research. Pharmaceutical statisticians have increasingly become vital members of these committees, contributing expertise in study design, data analysis, and interpretation, and working more generally to facilitate the integration of good statistical practices into experimental procedures. We review the "3Rs" principles (Replacement, Reduction, and Refinement), which are the foundation for the humane use of animals in scientific research, and discuss how statisticians can partner with IACUCs to help ensure robust and reproducible research while adhering to the 3Rs principles. We also highlight emerging areas of interest, such as the use of virtual control groups.
{"title":"The partnership between statisticians and the Institutional Animal Care and Use Committee (IACUC).","authors":"David Potter, Thomas Bradstreet, Davit Sargsyan, Xiao Tan, Vinicius Bonato, Dingzhou Li, John Liang, Ondrej Libiger, Jocelyn Sendecki, John Stansfield, Kanaka Tatikola, Jialin Xu, Brandy Campbell","doi":"10.1002/pst.2390","DOIUrl":"https://doi.org/10.1002/pst.2390","url":null,"abstract":"<p><p>In this tutorial we explore the valuable partnership between statisticians and Institutional Animal Care and Use Committees (IACUCs) in the context of animal research, shedding light on the critical role statisticians play in ensuring the ethical and scientifically rigorous use of animals in research. Pharmaceutical statisticians have increasingly become vital members of these committees, contributing expertise in study design, data analysis, and interpretation, and working more generally to facilitate the integration of good statistical practices into experimental procedures. We review the \"3Rs\" principles (Replacement, Reduction, and Refinement) which are the foundation for the humane use of animals in scientific research, and how statisticians can partner with IACUC to help ensure robust and reproducible research while adhering to the 3Rs principles. We also highlight emerging areas of interest, such as the use of virtual control groups.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141301311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vinicius Bonato, Szu-Yu Tang, Matilda Hsieh, Yao Zhang, Shibing Deng
Animal models are used in cancer preclinical research to identify drug targets, select compound candidates for clinical trials, determine optimal drug dosages, identify biomarkers, and ensure compound safety. This tutorial aims to provide an overview of study design and data analysis for animal studies, focusing on tumor growth inhibition (TGI) studies used for prioritization of anticancer compounds. The experimental design aspects discussed here include the selection of appropriate biological models, the choice of endpoints used to assess anticancer activity (tumor volumes, tumor growth rates, events, or categorical endpoints), considerations on measurement errors and potential biases related to this type of study, sample size estimation, and the handling of missing data. The tutorial also reviews the statistical analyses employed in TGI studies, considering both continuous endpoints collected at a single time-point and continuous endpoints collected longitudinally over multiple time-points. Additionally, time-to-event analysis is discussed for studies focusing on event occurrences such as animal deaths or tumor size reaching a certain threshold. Furthermore, for TGI studies involving categorical endpoints, statistical methodology is outlined to compare outcomes among treatment groups effectively. Lastly, this tutorial discusses analyses for assessing drug combination synergy in TGI studies, in which treatments are combined to enhance overall efficacy. The tutorial also includes R sample scripts to help users perform the relevant data analyses.
{"title":"Experimental design considerations and statistical analyses in preclinical tumor growth inhibition studies.","authors":"Vinicius Bonato, Szu-Yu Tang, Matilda Hsieh, Yao Zhang, Shibing Deng","doi":"10.1002/pst.2399","DOIUrl":"https://doi.org/10.1002/pst.2399","url":null,"abstract":"<p><p>Animal models are used in cancer pre-clinical research to identify drug targets, select compound candidates for clinical trials, determine optimal drug dosages, identify biomarkers, and ensure compound safety. This tutorial aims to provide an overview of study design and data analysis from animal studies, focusing on tumor growth inhibition (TGI) studies used for prioritization of anticancer compounds. Some of the experimental design aspects discussed here include the selection of the appropriate biological models, the choice of endpoints to be used for the assessment of anticancer activity (tumor volumes, tumor growth rates, events, or categorical endpoints), considerations on measurement errors and potential biases related to this type of study, sample size estimation, and discussions on missing data handling. The tutorial also reviews the statistical analyses employed in TGI studies, considering both continuous endpoints collected at single time-point and continuous endpoints collected longitudinally over multiple time-points. Additionally, time-to-event analysis is discussed for studies focusing on event occurrences such as animal deaths or tumor size reaching a certain threshold. Furthermore, for TGI studies involving categorical endpoints, statistical methodology is outlined to compare outcomes among treatment groups effectively. Lastly, this tutorial also discusses analysis for assessing drug combination synergy in TGI studies, which involves combining treatments to enhance overall treatment efficacy. The tutorial also includes R sample scripts to help users to perform relevant data analysis of this topic.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141301310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In randomized clinical trials that use a long-term efficacy endpoint, the follow-up time necessary to observe the endpoint may be substantial. In such trials, an attractive option is to consider an interim analysis based solely on an early outcome that could be used to expedite the evaluation of the treatment's efficacy. Garcia Barrado et al. (Pharm Stat. 2022; 21: 209-219) developed a methodology that allows introducing such an early interim analysis when both the early outcome and the long-term endpoint are normally distributed, continuous variables. We extend the methodology to any combination of early-outcome and long-term-endpoint types. As an example, we consider the case of a binary outcome and a time-to-event endpoint. We further evaluate the potential gain in operating characteristics (power, expected trial duration, and expected sample size) of a trial with such an interim analysis as a function of the properties of the early outcome as a surrogate for the long-term endpoint.
{"title":"Using an early outcome as the sole source of information of interim decisions regarding treatment effect on a long-term endpoint: The non-Gaussian case.","authors":"Leandro Garcia Barrado, Tomasz Burzykowski","doi":"10.1002/pst.2398","DOIUrl":"https://doi.org/10.1002/pst.2398","url":null,"abstract":"<p><p>In randomized clinical trials that use a long-term efficacy endpoint, the follow-up time necessary to observe the endpoint may be substantial. In such trials, an attractive option is to consider an interim analysis based solely on an early outcome that could be used to expedite the evaluation of treatment's efficacy. Garcia Barrado et al. (Pharm Stat. 2022; 21: 209-219) developed a methodology that allows introducing such an early interim analysis for the case when both the early outcome and the long-term endpoint are normally-distributed, continuous variables. We extend the methodology to any combination of the early-outcome and long-term-endpoint types. As an example, we consider the case of a binary outcome and a time-to-event endpoint. We further evaluate the potential gain in operating characteristics (power, expected trial duration, and expected sample size) of a trial with such an interim analysis in function of the properties of the early outcome as a surrogate for the long-term endpoint.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141261163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a trial design for locating group-specific doses when groups are partially or completely ordered by dose sensitivity. Previous trial designs for partially ordered groups are model-based, whereas the proposed method is model-assisted, providing clinicians with a simpler design. The proposed method performs similarly to model-based methods, providing simplicity without losing accuracy. Additionally, to the best of our knowledge, this is the first work on dose-finding for partially ordered groups with convergence results. To generalize the proposed method, a framework is introduced that allows partial orders to be mapped to a grid format with a known ordering across rows but an unknown ordering within rows.
{"title":"A model-assisted design for partially or completely ordered groups.","authors":"Connor Celum, Mark Conaway","doi":"10.1002/pst.2396","DOIUrl":"https://doi.org/10.1002/pst.2396","url":null,"abstract":"<p><p>This paper proposes a trial design for locating group-specific doses when groups are partially or completely ordered by dose sensitivity. Previous trial designs for partially ordered groups are model-based, whereas the proposed method is model-assisted, providing clinicians with a design that is simpler. The proposed method performs similarly to model-based methods, providing simplicity without losing accuracy. Additionally, to the best of our knowledge, the proposed method is the first paper on dose-finding for partially ordered groups with convergence results. To generalize the proposed method, a framework is introduced that allows partial orders to be transferred to a grid format with a known ordering across rows but an unknown ordering within rows.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141071600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. Estimation of the difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization, or g-computation, is a widely used method for covariate adjustment in estimating the unconditional difference in proportions because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and construct confidence intervals based on large-sample theory. However, their performance under small sample sizes and model misspecification has not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance the finite-sample performance. Extensive simulations demonstrate the performance of the proposed method across a wide range of sample sizes, randomization ratios, and model specifications. We apply the proposed method to a real data example to illustrate its practical utility.
{"title":"Covariate adjustment and estimation of difference in proportions in randomized clinical trials.","authors":"Jialuo Liu, Dong Xi","doi":"10.1002/pst.2397","DOIUrl":"https://doi.org/10.1002/pst.2397","url":null,"abstract":"<p><p>Difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. The estimation of difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization or g-computation is a widely used method for covariate adjustment in estimating unconditional difference in proportions, because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and confidence intervals based on large-sample theories. However, their performances under small sample sizes and model misspecification have not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance the finite sample performance. Extensive simulations are provided to demonstrate the performances of the proposed method, spanning a wide range of sample sizes, randomization ratios, and model specification. We apply the proposed method in a real data example to illustrate the practical utility.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141065823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
José L Jiménez, Isobel Barrott, Francesca Gasperoni, Dominic Magirr
What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type-I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point τ, a mathematically unambiguous summary measure. However, by emphasizing differences prior to τ, such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for direct comparison of weighted log-rank tests and tests based on the RMST. This new perspective allows a more informed choice of the analysis method, going beyond power and type I error comparison.
{"title":"Visualizing hypothesis tests in survival analysis under anticipated delayed effects.","authors":"José L Jiménez, Isobel Barrott, Francesca Gasperoni, Dominic Magirr","doi":"10.1002/pst.2393","DOIUrl":"https://doi.org/10.1002/pst.2393","url":null,"abstract":"<p><p>What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type-I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point <math> <semantics><mrow><mi>τ</mi></mrow> <annotation>$$ tau $$</annotation></semantics> </math> -a mathematically unambiguous summary measure. However, by emphasizing differences prior to <math> <semantics><mrow><mi>τ</mi></mrow> <annotation>$$ tau $$</annotation></semantics> </math> , such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for direct comparison of weighted log-rank tests and tests based on the RMST. This new perspective allows a more informed choice of the analysis method, going beyond power and type I error comparison.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140859909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01. Epub Date: 2023-12-28. DOI: 10.1002/pst.2353.
Zixing Wang, Qingyang Zhang, Allen Xue, James Whitmore
With the advent of cancer immunotherapy, some special features, including delayed treatment effect, cure rate, diminishing treatment effect, and crossing survival curves, are often observed in survival analysis. They violate the proportional hazards assumption and pose a unique challenge for conventional trial design and analysis strategies. Many methods, such as the cure rate model, have been developed based on mixture models to incorporate some of these features. In this work, we extend the mixture model to deal with multiple non-proportional patterns and develop its geometric average hazard ratio (gAHR) to quantify the treatment effect. We further derive a sample size and power formula based on the non-centrality parameter of the log-rank test and conduct a thorough analysis of the impact of each parameter on performance. Simulation studies showed a clear advantage of our new method over the proportional-hazard-based calculation across different non-proportional hazard scenarios. Moreover, the mixture modeling of two real trials demonstrates how to use prior information on the survival distribution among patients with different biomarker and early efficacy results in practice. By comparison with a simulation-based design, the new method provided a more efficient way to compute the power and sample size with high accuracy of estimation. Overall, both the theoretical derivation and the empirical studies demonstrate the promise of the proposed method in powering future innovative trial designs.
{"title":"Sample size calculation for mixture model based on geometric average hazard ratio and its applications to nonproportional hazard.","authors":"Zixing Wang, Qingyang Zhang, Allen Xue, James Whitmore","doi":"10.1002/pst.2353","DOIUrl":"10.1002/pst.2353","url":null,"abstract":"<p><p>With the advent of cancer immunotherapy, some special features including delayed treatment effect, cure rate, diminishing treatment effect and crossing survival are often observed in survival analysis. They violate the proportional hazard model assumption and pose a unique challenge for the conventional trial design and analysis strategies. Many methods like cure rate model have been developed based on mixture model to incorporate some of these features. In this work, we extend the mixture model to deal with multiple non-proportional patterns and develop its geometric average hazard ratio (gAHR) to quantify the treatment effect. We further derive a sample size and power formula based on the non-centrality parameter of the log-rank test and conduct a thorough analysis of the impact of each parameter on performance. Simulation studies showed a clear advantage of our new method over the proportional hazard based calculation across different non-proportional hazard scenarios. Moreover, the mixture modeling of two real trials demonstrates how to use the prior information on the survival distribution among patients with different biomarker and early efficacy results in practice. By comparison with a simulation-based design, the new method provided a more efficient way to compute the power and sample size with high accuracy of estimation. Overall, both theoretical derivation and empirical studies demonstrate the promise of the proposed method in powering future innovative trial designs.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"325-338"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139049061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}