Delayed outcomes are common in phase I oncology clinical trials. They cause logistic difficulties, waste resources, and prolong the trial duration. This article investigates this issue and proposes the time-to-event 3 + 3 (T3 + 3) design, which utilizes the actual follow-up times of at-risk patients with pending toxicity outcomes. The T3 + 3 design allows continuous accrual without unnecessary trial suspension and can be implemented at no cost with pretabulated dose-decision rules. Moreover, the T3 + 3 design uses isotonic regression to estimate the toxicity rates across dose levels and can therefore accommodate any target toxicity rate for the maximum tolerated dose (MTD). It greatly simplifies trial preparation and conduct, requiring neither intensive computation nor statistical consultation. The extension to other algorithm-based phase I dose-finding designs (e.g., the i3 + 3 design) is also studied. Comprehensive simulation studies investigate the performance of the T3 + 3 design under various dose-toxicity scenarios. The results confirm that the T3 + 3 design substantially shortens the trial duration compared with the conventional 3 + 3 design and yields much higher accuracy in MTD identification than the rolling six design. In summary, the T3 + 3 design addresses the delayed-outcome issue while retaining the desirable features of the 3 + 3 design, such as simplicity, transparency, and cost-free implementation. It has great potential to accelerate early-phase drug development.
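The isotonic-regression step mentioned in the abstract can be illustrated with the pooled-adjacent-violators algorithm (PAVA), which turns observed dose-level toxicity rates into a monotone non-decreasing sequence. The sketch below uses hypothetical DLT counts; the T3 + 3 paper's exact weighting and MTD-selection rule may differ.

```python
def pava(rates, weights):
    """Weighted isotonic (non-decreasing) regression via PAVA."""
    # Each block holds [weighted mean, total weight, number of pooled points].
    blocks = []
    for r, w in zip(rates, weights):
        blocks.append([r, w, 1])
        # Merge adjacent blocks while the non-decreasing order is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, n1 + n2])
    out = []
    for m, _, n in blocks:
        out.extend([m] * n)
    return out

# Hypothetical observed DLT rates by dose: 0/3, 2/6, 1/6, 3/6 patients.
tox = [0 / 3, 2 / 6, 1 / 6, 3 / 6]
n = [3, 6, 6, 6]
iso = pava(tox, n)  # monotone toxicity-rate estimates across dose levels
```

The violation between doses 2 and 3 (2/6 followed by 1/6) is pooled into a common rate, after which the MTD would be picked as the dose whose isotonic estimate is closest to the target toxicity rate.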
Guo J, Lu M, Wan I, Wang Y, Han L, Zang Y. "T3 + 3: 3 + 3 Design With Delayed Outcomes." Pharmaceutical Statistics (published online 2024-06-25). DOI: 10.1002/pst.2414.
David Potter, Thomas Bradstreet, Davit Sargsyan, Xiao Tan, Vinicius Bonato, Dingzhou Li, John Liang, Ondrej Libiger, Jocelyn Sendecki, John Stansfield, Kanaka Tatikola, Jialin Xu, Brandy Campbell
In this tutorial, we explore the valuable partnership between statisticians and Institutional Animal Care and Use Committees (IACUCs) in the context of animal research, shedding light on the critical role statisticians play in ensuring the ethical and scientifically rigorous use of animals in research. Pharmaceutical statisticians have increasingly become vital members of these committees, contributing expertise in study design, data analysis, and interpretation, and working more generally to facilitate the integration of good statistical practice into experimental procedures. We review the "3Rs" principles (Replacement, Reduction, and Refinement), which are the foundation for the humane use of animals in scientific research, and discuss how statisticians can partner with IACUCs to help ensure robust and reproducible research while adhering to the 3Rs. We also highlight emerging areas of interest, such as the use of virtual control groups.
"The partnership between statisticians and the Institutional Animal Care and Use Committee (IACUC)." Pharmaceutical Statistics (published online 2024-06-11). DOI: 10.1002/pst.2390.
Vinicius Bonato, Szu-Yu Tang, Matilda Hsieh, Yao Zhang, Shibing Deng
Animal models are used in preclinical cancer research to identify drug targets, select compound candidates for clinical trials, determine optimal drug dosages, identify biomarkers, and ensure compound safety. This tutorial provides an overview of the design and analysis of animal studies, focusing on tumor growth inhibition (TGI) studies used for the prioritization of anticancer compounds. The experimental design aspects discussed include the selection of appropriate biological models; the choice of endpoints for assessing anticancer activity (tumor volumes, tumor growth rates, events, or categorical endpoints); considerations regarding measurement error and potential biases in this type of study; sample size estimation; and the handling of missing data. The tutorial also reviews the statistical analyses employed in TGI studies, considering continuous endpoints collected at a single time point as well as continuous endpoints collected longitudinally over multiple time points. Additionally, time-to-event analysis is discussed for studies focusing on event occurrences such as animal death or tumor size reaching a certain threshold. For TGI studies involving categorical endpoints, statistical methodology is outlined to compare outcomes among treatment groups effectively. Lastly, the tutorial discusses analyses for assessing drug-combination synergy in TGI studies, in which treatments are combined to enhance overall efficacy. R sample scripts are included to help users perform the relevant analyses.
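A common continuous TGI summary compares mean tumor-volume change on treatment with that on control. The sketch below uses invented volumes; note that laboratories differ in the exact definition (e.g., ratio of final volumes, or of log-scale growth rates), so this is one illustrative convention, not the tutorial's prescribed analysis.

```python
def pct_tgi(treated_start, treated_end, control_start, control_end):
    """%TGI = 100 * (1 - mean volume change on treatment
                         / mean volume change on control)."""
    d_t = sum(e - s for s, e in zip(treated_start, treated_end)) / len(treated_start)
    d_c = sum(e - s for s, e in zip(control_start, control_end)) / len(control_start)
    return 100.0 * (1.0 - d_t / d_c)

# Hypothetical tumor volumes (mm^3) at baseline and day 21, 4 animals per arm.
tgi = pct_tgi([100, 110, 95, 105], [180, 200, 170, 190],
              [100, 105, 98, 102], [400, 420, 380, 410])
```

Here treated tumors grew by about 82 mm^3 on average versus about 301 mm^3 for controls, giving roughly 73% growth inhibition.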
"Experimental design considerations and statistical analyses in preclinical tumor growth inhibition studies." Pharmaceutical Statistics (published online 2024-06-10). DOI: 10.1002/pst.2399.
In randomized clinical trials that use a long-term efficacy endpoint, the follow-up time necessary to observe the endpoint may be substantial. In such trials, an attractive option is an interim analysis based solely on an early outcome, which could be used to expedite the evaluation of the treatment's efficacy. Garcia Barrado et al. (Pharm Stat. 2022; 21: 209-219) developed a methodology for introducing such an early interim analysis when both the early outcome and the long-term endpoint are normally distributed, continuous variables. We extend the methodology to any combination of early-outcome and long-term-endpoint types. As an example, we consider the case of a binary outcome and a time-to-event endpoint. We further evaluate the potential gain in operating characteristics (power, expected trial duration, and expected sample size) of a trial with such an interim analysis as a function of the properties of the early outcome as a surrogate for the long-term endpoint.
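The flavor of an interim decision driven only by an early outcome can be conveyed with a toy conditional-power rule: treat the early-outcome and final-endpoint z-statistics as bivariate normal with correlation rho, and stop for futility if the conditional power given the interim value is low. All quantities here (the drifts, rho, the thresholds) are illustrative assumptions, not the paper's estimator.

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, rho, drift2, alpha=0.025):
    """P(Z2 > z_{1-alpha} | Z1 = z1) when (Z1, Z2) are bivariate normal with
    unit variances, correlation rho, and means (rho * drift2, drift2) --
    a toy surrogate model, not the non-Gaussian method of the paper."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    mean = drift2 + rho * (z1 - rho * drift2)  # E[Z2 | Z1 = z1]
    sd = sqrt(1.0 - rho ** 2)                  # SD(Z2 | Z1)
    return 1.0 - nd.cdf((z_crit - mean) / sd)

# Promising early signal under a strong surrogate relationship (rho = 0.8):
cp = conditional_power(z1=2.0, rho=0.8, drift2=2.0)
```

A stronger surrogate (larger rho) makes the interim z-value more informative, which is exactly the dependence of operating characteristics on surrogate properties that the abstract evaluates.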
Garcia Barrado L, Burzykowski T. "Using an early outcome as the sole source of information of interim decisions regarding treatment effect on a long-term endpoint: The non-Gaussian case." Pharmaceutical Statistics (published online 2024-06-05). DOI: 10.1002/pst.2398.
This paper proposes a trial design for locating group-specific doses when groups are partially or completely ordered by dose sensitivity. Previous trial designs for partially ordered groups are model-based, whereas the proposed method is model-assisted, offering clinicians a simpler design. The proposed method performs similarly to model-based methods, providing simplicity without sacrificing accuracy. Additionally, to the best of our knowledge, this is the first work on dose-finding for partially ordered groups that provides convergence results. To generalize the proposed method, a framework is introduced that allows partial orders to be transferred to a grid format with a known ordering across rows but an unknown ordering within rows.
Celum C, Conaway M. "A model-assisted design for partially or completely ordered groups." Pharmaceutical Statistics (published online 2024-05-20). DOI: 10.1002/pst.2396.
The difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. Its estimation can be improved by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization, or g-computation, is a widely used method of covariate adjustment for estimating the unconditional difference in proportions, because of its robustness to model misspecification. Various inference methods based on large-sample theory have been proposed to quantify the uncertainty and construct confidence intervals. However, their performance under small sample sizes and model misspecification has not been comprehensively evaluated. We propose an alternative approach that estimates the unconditional variance of the standardization estimator with the robust sandwich estimator to further enhance finite-sample performance. Extensive simulations demonstrate the performance of the proposed method across a wide range of sample sizes, randomization ratios, and model specifications. We apply the proposed method to a real data example to illustrate its practical utility.
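The standardization (g-computation) point estimate itself is simple: fit an outcome model, predict each patient's outcome under treatment and under control, and average the difference over the whole sample's covariate distribution. A minimal sketch with one binary covariate and simulated data follows; with a binary covariate the fitted stratum proportions play the role of the working model, and the paper's sandwich variance estimator is not shown.

```python
import random

random.seed(42)
n = 2000
data = []
for _ in range(n):
    x = int(random.random() < 0.4)   # prognostic binary baseline covariate
    a = int(random.random() < 0.5)   # 1:1 randomized treatment
    p = 0.2 + 0.25 * a + 0.3 * x     # true outcome probability (invented)
    y = int(random.random() < p)
    data.append((x, a, y))

def g_computation(data):
    """Average stratum-specific risk differences over the covariate
    distribution of the full sample (standardization estimate)."""
    diff = 0.0
    for x_val in (0, 1):
        stratum = [(a, y) for x, a, y in data if x == x_val]
        p1 = sum(y for a, y in stratum if a == 1) / sum(1 for a, _ in stratum if a == 1)
        p0 = sum(y for a, y in stratum if a == 0) / sum(1 for a, _ in stratum if a == 0)
        diff += (len(stratum) / len(data)) * (p1 - p0)
    return diff

risk_diff = g_computation(data)  # estimates the true marginal difference 0.25
```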
Liu J, Xi D. "Covariate adjustment and estimation of difference in proportions in randomized clinical trials." Pharmaceutical Statistics (published online 2024-05-19). DOI: 10.1002/pst.2397.
José L Jiménez, Isobel Barrott, Francesca Gasperoni, Dominic Magirr
What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point τ, a mathematically unambiguous summary measure. However, by emphasizing differences prior to τ, such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for the direct comparison of weighted log-rank tests and tests based on the RMST. This new perspective allows a more informed choice of analysis method, going beyond power and type I error comparisons.
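The RMST up to τ is simply the area under the Kaplan-Meier curve on [0, τ]. A minimal sketch (hypothetical event/censoring times; no variance estimate, so not the full test statistic the article compares):

```python
def rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier estimate
    of S(t) on [0, tau]. Events are processed before censorings at tied
    times; a minimal sketch without variance estimation."""
    data = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    n_at_risk = len(data)
    s, t_prev, area = 1.0, 0.0, 0.0
    for t, d in data:
        if t > tau:
            break
        area += s * (t - t_prev)       # S(t) is flat between observed times
        if d:                          # event: Kaplan-Meier step down
            s *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1
        t_prev = t
    return area + s * (tau - t_prev)   # extend the last flat piece to tau

# Hypothetical arm: follow-up times in months (1 = event, 0 = censored).
r = rmst([2, 4, 5, 7, 9, 12], [1, 0, 1, 1, 0, 1], tau=10)
```

An RMST-based comparison would compute this area for each arm and test the difference, which is why it emphasizes what happens before τ.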
"Visualizing hypothesis tests in survival analysis under anticipated delayed effects." Pharmaceutical Statistics (published online 2024-05-06). DOI: 10.1002/pst.2393.
Pub Date: 2024-05-01 (Epub 2023-12-28). DOI: 10.1002/pst.2353.
Zixing Wang, Qingyang Zhang, Allen Xue, James Whitmore
With the advent of cancer immunotherapy, special features such as delayed treatment effects, cure rates, diminishing treatment effects, and crossing survival curves are often observed in survival analysis. They violate the proportional hazards assumption and pose a unique challenge for conventional trial design and analysis strategies. Many methods, such as the cure rate model, have been developed based on mixture models to incorporate some of these features. In this work, we extend the mixture model to handle multiple non-proportional patterns and develop its geometric average hazard ratio (gAHR) to quantify the treatment effect. We further derive a sample size and power formula based on the non-centrality parameter of the log-rank test and conduct a thorough analysis of the impact of each parameter on performance. Simulation studies showed a clear advantage of our new method over the proportional-hazards-based calculation across different non-proportional-hazards scenarios. Moreover, mixture modeling of two real trials demonstrates how to use prior information on the survival distribution among patients with different biomarker and early efficacy results in practice. Compared with a simulation-based design, the new method provides a more efficient way to compute power and sample size with high estimation accuracy. Overall, both the theoretical derivation and the empirical studies demonstrate the promise of the proposed method in powering future innovative trial designs.
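The log-rank non-centrality argument underlying such formulas can be illustrated with Schoenfeld's classical required-events calculation, substituting the gAHR for the constant hazard ratio. This is a sketch of the standard formula, not the paper's exact derivation, and gAHR = 0.7 is purely illustrative.

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.9, alloc=0.5):
    """Required number of events for a two-sided level-alpha log-rank test,
    from Schoenfeld's formula: d = (z_a + z_b)^2 / (p(1-p) * log(hr)^2),
    where p is the allocation fraction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

# With the gAHR playing the role of the hazard ratio under non-proportionality:
events = schoenfeld_events(hr=0.7)
```

For an effect of 0.7 with 90% power at two-sided alpha = 0.05, the formula gives 331 events, matching the standard proportional-hazards calculation; the paper's contribution is justifying a gAHR plug-in when hazards are non-proportional.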
"Sample size calculation for mixture model based on geometric average hazard ratio and its applications to nonproportional hazard." Pharmaceutical Statistics (published 2024-05-01). DOI: 10.1002/pst.2353.
The pharmaceutical industry is plagued by long, costly development and high risk. A company's effective management and optimisation of its portfolio of projects is therefore critical for success. Project metrics such as the probability of success enable modelling of a company's pipeline while accounting for the high uncertainty inherent in the industry. Making portfolio decisions inherently involves managing risk, and statisticians are ideally positioned not only to champion the derivation of metrics for individual projects, but also to advocate decision-making at the broader portfolio level. This article examines existing portfolio decision-making approaches and suggests opportunities for statisticians to add value by introducing probabilistic thinking, quantitative decision-making, and increasingly advanced methodologies.
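A toy Monte Carlo shows why portfolio-level thinking goes beyond each project's probability of success (PoS): the portfolio's value distribution, not just its expectation, drives risk decisions. All figures below are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical projects: (PoS, net value if successful, development cost).
projects = [
    (0.6, 300.0, 50.0),
    (0.3, 800.0, 120.0),
    (0.1, 2000.0, 200.0),
]

def simulate_portfolio(n_sims=100_000):
    """Simulate total portfolio value: every project pays its cost,
    and returns its value only if it succeeds."""
    totals = []
    for _ in range(n_sims):
        total = 0.0
        for pos, value, cost in projects:
            total -= cost
            if random.random() < pos:
                total += value
        totals.append(total)
    return totals

totals = simulate_portfolio()
expected = sum(totals) / len(totals)                    # analytic value: 250
prob_loss = sum(t < 0 for t in totals) / len(totals)    # analytic value: 0.63
```

Despite a comfortably positive expected value, this portfolio loses money in roughly 63% of simulated futures, the kind of distributional insight a single PoS number cannot convey.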
Wiklund S-J, Thorn K, Götte H, Hacquoil K, Saint-Hilary G, Carlton A. "Going beyond probability of success: Opportunities for statisticians to influence quantitative decision-making at the portfolio level." Pharmaceutical Statistics (published 2024-05-01). DOI: 10.1002/pst.2361.
Pub Date: 2024-05-01 (Epub 2024-01-11). DOI: 10.1002/pst.2359.
Jinglin Zhong, David Petullo
Since the publication of ICH E9 (R1), "Addendum to statistical principles for clinical trials: on choosing appropriate estimands and defining sensitivity analyses in clinical trials," there has been considerable debate about the hypothetical strategy for handling intercurrent events. Arguments against the hypothetical strategy are twofold: (1) the clinical question has limited clinical/regulatory interest; (2) the estimation may require strong statistical assumptions. In this article, we provide an example of a hypothetical strategy for handling the use of rescue medications in the acute pain setting. We argue that the treatment effect attributable to the drug alone is the clinical question of interest and is important to regulators. The hypothetical strategy is important when developing non-opioid treatments because it estimates the treatment effect due to treatment during the pre-specified evaluation period, whereas the treatment policy strategy does not. Two widely acceptable and non-controversial clinical inputs are required to construct a reasonable estimator. More importantly, this estimator does not rely on additional strong statistical assumptions and is considered reasonable for regulatory decision-making. We point out examples where estimators for a hypothetical strategy can be constructed without any strong additional statistical assumptions beyond acceptable clinical inputs. We also showcase a new way to obtain estimates based on disease-specific clinical knowledge instead of strong statistical assumptions. In the example presented, we clearly demonstrate the advantages of the hypothetical strategy compared with alternative strategies, including the treatment policy strategy and a composite variable strategy.
{"title":"Application of hypothetical strategies in acute pain.","authors":"Jinglin Zhong, David Petullo","doi":"10.1002/pst.2359","DOIUrl":"10.1002/pst.2359","url":null,"abstract":"<p><p>Since the publication of ICH E9 (R1), \"Addendum to statistical principles for clinical trials: on choosing appropriate estimands and defining sensitivity analyses in clinical trials,\" there has been a lot of debate about the hypothetical strategy for handling intercurrent events. Arguments against the hypothetical strategy are twofold: (1) the clinical question has limited clinical/regulatory interest; (2) the estimation may need strong statistical assumptions. In this article, we provide an example of a hypothetical strategy handling use of rescue medications in the acute pain setting. We argue that the treatment effect of a drug that is attributable to the treatment alone is the clinical question of interest and is important to regulators. The hypothetical strategy is important when developing non-opioid treatment as it estimates the treatment effect due to treatment during the pre-specified evaluation period whereas the treatment policy strategy does not. Two widely acceptable and non-controversial clinical inputs are required to construct a reasonable estimator. More importantly, this estimator does not rely on additional strong statistical assumptions and is considered reasonable for regulatory decision making. In this article, we point out examples where estimators for a hypothetical strategy can be constructed without any strong additional statistical assumptions besides acceptable clinical inputs. We also showcase a new way to obtain estimation based on disease specific clinical knowledge instead of strong statistical assumptions. In the example presented, we clearly demonstrate the advantages of the hypothetical strategy compared to alternative strategies including the treatment policy strategy and a composite variable strategy.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139425263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}