Potency Assay Variability Estimation in Practice.
Pub Date: 2025-01-01 | Epub Date: 2024-07-08 | DOI: 10.1002/pst.2408
Hang Li, Tomasz M Witkos, Scott Umlauf, Christopher Thompson
During the drug development process, potency testing plays an important role in the quality assessment required for the manufacturing and marketing of biologics. Due to multiple operational and biological factors, higher variability is usually observed in bioassays than in physicochemical methods. In this paper, we discuss different sources of bioassay variability and how this variability can be statistically estimated. In addition, we propose an algorithm to estimate the variability of reportable results associated with different numbers of runs and their corresponding out-of-specification (OOS) rates under a given specification. Numerical experiments are conducted on multiple assay formats to elucidate the empirical distribution of bioassay variability.
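To illustrate the run-averaging idea in this abstract, the following Python sketch estimates by Monte Carlo the variability of a reportable result defined as the mean of n runs, and its OOS rate against fixed specification limits. The variance components, true potency, and specification limits are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_potency = 100.0                        # % of label claim (assumed)
sd_between_run, sd_within_run = 6.0, 4.0    # hypothetical variance components
spec_lo, spec_hi = 80.0, 125.0              # hypothetical specification limits

def simulate_oos_rate(n_runs, n_sim=100_000):
    """SD and OOS rate of the reportable result when it is the mean of n_runs runs."""
    run_means = true_potency + rng.normal(0, sd_between_run, (n_sim, n_runs))
    runs = run_means + rng.normal(0, sd_within_run, (n_sim, n_runs))
    reportable = runs.mean(axis=1)
    oos = np.mean((reportable < spec_lo) | (reportable > spec_hi))
    return reportable.std(ddof=1), oos

for n in (1, 2, 3, 6):
    sd, oos = simulate_oos_rate(n)
    print(f"n_runs={n}: SD of reportable result={sd:.2f}, OOS rate={oos:.4%}")
```

Averaging more runs shrinks the reportable-result variability and, in this toy setup, the OOS rate along with it.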
{"title":"Potency Assay Variability Estimation in Practice.","authors":"Hang Li, Tomasz M Witkos, Scott Umlauf, Christopher Thompson","doi":"10.1002/pst.2408","DOIUrl":"10.1002/pst.2408","url":null,"abstract":"<p><p>During the drug development process, testing potency plays an important role in the quality assessment required for the manufacturing and marketing of biologics. Due to multiple operational and biological factors, higher variability is usually observed in bioassays compared with physicochemical methods. In this paper, we discuss different sources of bioassay variability and how this variability can be statistically estimated. In addition, we propose an algorithm to estimate the variability of reportable results associated with different numbers of runs and their corresponding OOS rates under a given specification. Numerical experiments are conducted on multiple assay formats to elucidate the empirical distribution of bioassay variability.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2408"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788244/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141559471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixture Experimentation in Pharmaceutical Formulations: A Tutorial.
Pub Date: 2025-01-01 | Epub Date: 2024-08-05 | DOI: 10.1002/pst.2426
Lynne B Hare, Stan Altan, Hans Coppenolle
Mixture experimentation is commonly seen in pharmaceutical formulation studies, where the relative proportions of the individual components are modeled for their effects on product attributes. The requirement that the component proportions sum to 1 has given rise to the class of designs known as mixture designs. The first mixture designs were published by Quenouille in 1953, but it took nearly 40 years for the earliest mixture design applications to appear in the pharmaceutical sciences literature, published by Kettaneh-Wold in 1991 and Waaler in 1992. Since then, the advent of efficient computer algorithms for design generation has made this class of designs easily accessible to pharmaceutical statisticians, although they appear to remain an underutilized experimental strategy even today. One goal of this tutorial is to draw the attention of experimental statisticians to this class of designs and their advantages in pursuing formulation studies such as excipient compatibility studies. We present sufficient material to introduce the novice practitioner to this class of designs, the associated models, and analysis strategies. An example of a mixture-process variable design is given as a case study.
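As a minimal sketch of one classical member of this design class (not code from the tutorial), the snippet below enumerates the {q, m} simplex-lattice design: all q-component proportion vectors with entries in {0, 1/m, ..., 1} that sum to 1. Scheffé-type mixture models are then typically fit to responses observed at these blends.

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(q, m):
    """Return the {q, m} simplex-lattice design points as tuples of exact fractions."""
    pts = [p for p in product(range(m + 1), repeat=q) if sum(p) == m]
    return [tuple(Fraction(x, m) for x in p) for p in pts]

# The {3, 2} lattice: 6 blends of a three-component formulation
# (pure components plus 50:50 binary blends).
for point in simplex_lattice(3, 2):
    print([str(x) for x in point])
```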
{"title":"Mixture Experimentation in Pharmaceutical Formulations: A Tutorial.","authors":"Lynne B Hare, Stan Altan, Hans Coppenolle","doi":"10.1002/pst.2426","DOIUrl":"10.1002/pst.2426","url":null,"abstract":"<p><p>Mixture experimentation is commonly seen in pharmaceutical formulation studies, where the relative proportions of the individual components are modeled for effects on product attributes. The requirement that the sum of the component proportions equals 1 has given rise to the class of designs, known as mixture designs. The first mixture designs were published by Quenouille in 1953 but it took nearly 40 years for the earliest mixture design applications to be published in the pharmaceutical sciences literature by Kettaneh-Wold in 1991 and Waaler in 1992. Since then, the advent of efficient computer algorithms to generate designs has made this class of designs easily accessible to pharmaceutical statisticians, although the use of these designs appears to be an underutilized experimental strategy even today. One goal of this tutorial is to draw the attention of experimental statisticians to this class of designs and their advantages in pursuing formulation studies such as excipient compatibility studies. We present sufficient materials to introduce the novice practitioner to this class of design, associated models, and analysis strategies. An example of a mixture-process variable design is given as a case study.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2426"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Covariate adjustment and estimation of difference in proportions in randomized clinical trials.
Pub Date: 2024-11-01 | Epub Date: 2024-05-19 | DOI: 10.1002/pst.2397
Jialuo Liu, Dong Xi
Difference in proportions is frequently used to measure the treatment effect for binary outcomes in randomized clinical trials. The estimation of the difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization, or g-computation, is a widely used method for covariate adjustment in estimating the unconditional difference in proportions because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and construct confidence intervals based on large-sample theory. However, their performance under small sample sizes and model misspecification has not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance finite-sample performance. Extensive simulations are provided to demonstrate the performance of the proposed method, spanning a wide range of sample sizes, randomization ratios, and model specifications. We apply the proposed method to a real data example to illustrate its practical utility.
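The following Python sketch shows the standardization (g-computation) point estimator on simulated trial data: fit a covariate-adjusted logistic model, predict each patient's outcome probability under both treatment assignments, and average the difference. The paper's sandwich-based variance is not reproduced here; a simple nonparametric bootstrap stands in for it, and all data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
x = rng.normal(size=n)                     # prognostic baseline covariate
a = rng.binomial(1, 0.5, size=n)           # 1:1 randomized treatment
p_true = 1 / (1 + np.exp(-(-0.5 + 1.0 * a + 0.8 * x)))
y = rng.binomial(1, p_true)

def g_computation(y, a, x):
    """Standardized (unconditional) risk difference from an adjusted logistic fit."""
    X = np.column_stack([np.ones_like(x), a, x])
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    X1 = np.column_stack([np.ones_like(x), np.ones_like(x), x])   # everyone treated
    X0 = np.column_stack([np.ones_like(x), np.zeros_like(x), x])  # everyone control
    return fit.predict(X1).mean() - fit.predict(X0).mean()

est = g_computation(y, a, x)
boot = [g_computation(y[i], a[i], x[i])
        for i in (rng.integers(0, n, n) for _ in range(500))]
print(f"adjusted risk difference = {est:.3f}, bootstrap SE = {np.std(boot, ddof=1):.3f}")
```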
Estimation of Treatment Policy Estimands for Continuous Outcomes Using Off-Treatment Sequential Multiple Imputation.
Pub Date: 2024-11-01 | Epub Date: 2024-08-04 | DOI: 10.1002/pst.2411
Thomas Drury, Juan J Abellan, Nicky Best, Ian R White
The estimands framework outlined in ICH E9 (R1) describes the components needed to precisely define the effects to be estimated in clinical trials, which includes how post-baseline 'intercurrent' events (IEs) are to be handled. In late-stage clinical trials, it is common to handle IEs like 'treatment discontinuation' using the treatment policy strategy and target the treatment effect on outcomes regardless of treatment discontinuation. For continuous repeated measures, this type of effect is often estimated using all observed data before and after discontinuation, with either a mixed model for repeated measures (MMRM) or multiple imputation (MI) to handle any missing data. In their basic forms, both estimation methods ignore treatment discontinuation in the analysis and may therefore be biased if patient outcomes after treatment discontinuation differ from those of patients still assigned to treatment, particularly since missing data are more common for patients who have discontinued treatment. We therefore propose and evaluate a set of MI models that can accommodate differences between outcomes before and after treatment discontinuation. The models are evaluated in the context of planning a Phase 3 trial for a respiratory disease. We show that analyses ignoring treatment discontinuation can introduce substantial bias and can sometimes underestimate variability. We also show that some of the proposed MI models can successfully correct the bias, but inevitably lead to increases in variance. We conclude that some of the proposed MI models are preferable to the traditional analysis ignoring treatment discontinuation, but the precise choice of MI model will likely depend on the trial design, the disease of interest, and the amount of observed and missing data following treatment discontinuation.
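The core idea can be sketched in a deliberately simplified form: one follow-up visit, normally distributed outcomes, and missing values imputed from a model fit to observed off-treatment data rather than on-treatment data, with estimates combined across imputations by Rubin's rules. Everything below (data-generating model, cell-mean imputation, numbers) is a hypothetical toy, not the sequential MI models evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
arm = rng.binomial(1, 0.5, n)                 # randomized treatment
disc = rng.binomial(1, 0.25, n)               # discontinued treatment early
# hypothetical truth: treatment benefit is attenuated after discontinuation
y = 1.0 * arm * (1 - disc) + 0.2 * arm * disc + rng.normal(0, 1, n)
observed = rng.random(n) < np.where(disc == 1, 0.5, 0.95)

def treatment_policy_mi(n_imp=50):
    effects, variances = [], []
    n1, n0 = (arm == 1).sum(), (arm == 0).sum()
    for _ in range(n_imp):
        y_imp = y.copy()
        # separate imputation distributions for on- and off-treatment outcomes
        for g in (0, 1):
            for d in (0, 1):
                cell = (arm == g) & (disc == d)
                donors = y[cell & observed]
                mu, sd = donors.mean(), donors.std(ddof=1)
                miss = cell & ~observed
                # note: proper MI would also draw (mu, sd) from their posterior
                y_imp[miss] = rng.normal(mu, sd, miss.sum())
        effects.append(y_imp[arm == 1].mean() - y_imp[arm == 0].mean())
        variances.append(y_imp[arm == 1].var(ddof=1) / n1
                         + y_imp[arm == 0].var(ddof=1) / n0)
    qbar, ubar, b = np.mean(effects), np.mean(variances), np.var(effects, ddof=1)
    return qbar, np.sqrt(ubar + (1 + 1 / n_imp) * b)   # Rubin's rules

est, se = treatment_policy_mi()
print(f"treatment policy effect = {est:.3f} (SE {se:.3f})")
```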
Investigating Stability in Subgroup Identification for Stratified Medicine.
Pub Date: 2024-11-01 | Epub Date: 2024-06-25 | DOI: 10.1002/pst.2409
G M Hair, T Jemielita, S Mt-Isa, P M Schnell, R Baumgartner
Subgroup analysis may be used to investigate treatment effect heterogeneity among subsets of the study population defined by baseline characteristics. Several methodologies have been proposed in recent years, and with them, statistical issues such as multiplicity, complexity, and selection bias have been widely discussed. Some methods adjust for one or more of these issues; however, few discuss or consider the stability of the subgroup assignments. We propose exploring the stability of subgroups as a sensitivity analysis step for stratified medicine, both to assess the robustness of the identified subgroups and to identify possible factors that may drive this instability. After applying Bayesian credible subgroups, a nonparametric bootstrap can be used to assess stability at the subgroup and patient levels. Our findings illustrate that when the treatment effect is small or not so evident, patients are more likely to switch to different subgroups (jumpers) across bootstrap resamples. In contrast, when the treatment effect is large or extremely convincing, patients generally remain in the same subgroup. While the proposed subgroup stability method is illustrated through the Bayesian credible subgroups method on time-to-event data, this general approach can be used with other subgroup identification methods and endpoints.
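A schematic Python sketch of the stability check follows: re-run a subgroup identification rule on nonparametric bootstrap resamples and record, per patient, how often they land in the benefiting subgroup. Here `identify_subgroup` is a hypothetical stand-in for any identification method (the paper uses Bayesian credible subgroups); a toy rule on a single biomarker is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
biomarker = rng.normal(size=n)
benefit = 0.8 * (biomarker > 0) + rng.normal(0, 1, n)   # toy individual benefit

def identify_subgroup(idx):
    """Toy rule trained on a resample: benefiting subgroup = biomarker above the
    resample median, if mean benefit above the median exceeds that below it."""
    cut = np.median(biomarker[idx])
    hi = benefit[idx][biomarker[idx] > cut]
    lo = benefit[idx][biomarker[idx] <= cut]
    return biomarker > cut if hi.mean() > lo.mean() else np.ones(n, bool)

n_boot = 500
membership = np.zeros(n)
for _ in range(n_boot):
    membership += identify_subgroup(rng.integers(0, n, n))
membership /= n_boot   # per-patient membership frequency; values near 0.5 flag jumpers
unstable = np.mean((membership > 0.2) & (membership < 0.8))
print(f"patients switching subgroups in >20% of resamples: {unstable:.1%}")
```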
{"title":"Investigating Stability in Subgroup Identification for Stratified Medicine.","authors":"G M Hair, T Jemielita, S Mt-Isa, P M Schnell, R Baumgartner","doi":"10.1002/pst.2409","DOIUrl":"10.1002/pst.2409","url":null,"abstract":"<p><p>Subgroup analysis may be used to investigate treatment effect heterogeneity among subsets of the study population defined by baseline characteristics. Several methodologies have been proposed in recent years and with these, statistical issues such as multiplicity, complexity, and selection bias have been widely discussed. Some methods adjust for one or more of these issues; however, few of them discuss or consider the stability of the subgroup assignments. We propose exploring the stability of subgroups as a sensitivity analysis step for stratified medicine to assess the robustness of the identified subgroups besides identifying possible factors that may drive this instability. After applying Bayesian credible subgroups, a nonparametric bootstrap can be used to assess stability at subgroup-level and patient-level. Our findings illustrate that when the treatment effect is small or not so evident, patients are more likely to switch to different subgroups (jumpers) across bootstrap resamples. In contrast, when the treatment effect is large or extremely convincing, patients generally remain in the same subgroup. While the proposed subgroup stability method is illustrated through Bayesian credible subgroups method on time-to-event data, this general approach can be used with other subgroup identification methods and endpoints.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"945-958"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141458676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Futility Interim Analysis Based on Probability of Success Using a Surrogate Endpoint.
Pub Date: 2024-11-01 | DOI: 10.1002/pst.2410
Ronan Fougeray, Loïck Vidot, Marco Ratta, Zhaoyang Teng, Donia Skanji, Gaëlle Saint-Hilary
In clinical trials with time-to-event data, the evaluation of treatment efficacy can be a long and complex process, especially when considering long-term primary endpoints. Using surrogate endpoints that correlate with the primary endpoint has become a common practice to accelerate decision-making. Moreover, the ethical need to minimize sample size and the practical need to optimize available resources have encouraged the scientific community to develop methodologies that leverage historical data. Relying on the general theory of group sequential designs and using a Bayesian framework, the methodology described in this paper exploits a documented historical relationship between a clinical "final" endpoint and a surrogate endpoint to build an informative prior for the primary endpoint, using surrogate data from an early interim analysis of the clinical trial. The predictive probability of success of the trial is then used to define a futility-stopping rule. The methodology demonstrates substantial enhancements in trial operating characteristics when there is good agreement between current and historical data. Furthermore, incorporating a robust approach that combines the surrogate prior with a vague component mitigates the impact of minor prior-data conflicts while maintaining acceptable performance even in the presence of significant prior-data conflicts. The proposed methodology was applied to design a Phase III clinical trial in metastatic colorectal cancer, with overall survival as the primary endpoint and progression-free survival as the surrogate endpoint.
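A stylized sketch of this futility rule, on the log hazard ratio (HR) scale with normal approximations, is given below. The historical regression linking surrogate and final endpoints, the interim estimate, and all numeric settings are hypothetical placeholders, not the paper's documented relationship or recommended thresholds.

```python
import numpy as np
from scipy import stats

# Interim estimate of the *surrogate* treatment effect (log HR for PFS); assumed.
surr_est, surr_se = -0.25, 0.12
# Assumed historical relationship: final log HR ~ a + b * surrogate log HR.
a, b, resid_sd = 0.02, 0.85, 0.10
prior_mean = a + b * surr_est
prior_sd = np.sqrt((b * surr_se) ** 2 + resid_sd ** 2)

# Final analysis on OS: success if Z < z_crit with d_remaining events (1:1 trial).
d_remaining = 350
z_crit = stats.norm.ppf(0.025)            # one-sided 2.5% level, benefit = negative log HR
se_final = 2 / np.sqrt(d_remaining)       # approximate SE of the final log HR estimate

rng = np.random.default_rng(0)
theta = rng.normal(prior_mean, prior_sd, 100_000)   # draws of the final-endpoint effect
z_final = rng.normal(theta, se_final) / se_final    # predictive final test statistics
ppos = np.mean(z_final < z_crit)
print(f"predictive probability of success = {ppos:.2f}",
      "-> stop for futility" if ppos < 0.10 else "-> continue")
```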
Survival Analysis Without Sharing of Individual Patient Data by Using a Gaussian Copula.
Pub Date: 2024-11-01 | Epub Date: 2024-07-07 | DOI: 10.1002/pst.2415
Federico Bonofiglio
Cox regression and Kaplan-Meier estimation are often needed in clinical research, and both require access to individual patient data (IPD). However, IPD cannot always be shared because of privacy or proprietary restrictions, which complicates such estimation. We propose a method that generates pseudodata replacing the IPD by sharing only non-disclosive aggregates such as IPD marginal moments and a correlation matrix. These aggregates are collected by a central computer and input as parameters to a Gaussian copula (GC) that generates the pseudodata. Survival inferences are computed on the pseudodata as if it were the IPD. Using practical examples, we demonstrate the utility of the method via the amount of IPD inferential content recoverable by the GC. We compare the GC to a summary-based meta-analysis and an IPD bootstrap distributed across several centers. Other pseudodata approaches are also considered. In the empirical results, the GC approximates the utility of the IPD bootstrap, although it might yield more conservative inferences and might have limitations in subgroup analyses. Overall, the GC avoids many legal problems related to IPD privacy or property while enabling approximation of common IPD survival analyses that would otherwise be difficult to conduct. Sharing more IPD aggregates than is currently practiced could facilitate "second-purpose" research and relax concerns regarding IPD access.
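A compact sketch of pseudodata generation from non-disclosive aggregates via a Gaussian copula follows: only marginal parameters and a correlation matrix are "shared". The marginal families (log-normal time, Bernoulli event status, normal age) and all aggregate values are assumptions for illustration; Kaplan-Meier or Cox models would then be fit to the resulting pseudodata as if it were patient-level data.

```python
import numpy as np
from scipy import stats

# Shared aggregates (hypothetical): correlation matrix for
# (log-time, event indicator, age) plus marginal parameters used below.
corr = np.array([[ 1.0, -0.3, -0.2],
                 [-0.3,  1.0,  0.1],
                 [-0.2,  0.1,  1.0]])

rng = np.random.default_rng(11)
z = rng.multivariate_normal(np.zeros(3), corr, size=1000)
u = stats.norm.cdf(z)                                   # Gaussian copula -> correlated uniforms

time = stats.lognorm(s=0.8, scale=np.exp(6.0)).ppf(u[:, 0])   # survival time in days
event = (u[:, 1] < 0.6).astype(int)                           # 60% marginal event rate
age = stats.norm(62, 10).ppf(u[:, 2])                         # baseline covariate

pseudodata = np.column_stack([time, event, age])
print(pseudodata[:3].round(1))   # pseudo-IPD rows, ready for KM/Cox estimation
```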
{"title":"Survival Analysis Without Sharing of Individual Patient Data by Using a Gaussian Copula.","authors":"Federico Bonofiglio","doi":"10.1002/pst.2415","DOIUrl":"10.1002/pst.2415","url":null,"abstract":"<p><p>Cox regression and Kaplan-Meier estimations are often needed in clinical research and this requires access to individual patient data (IPD). However, IPD cannot always be shared because of privacy or proprietary restrictions, which complicates the making of such estimations. We propose a method that generates pseudodata replacing the IPD by only sharing non-disclosive aggregates such as IPD marginal moments and a correlation matrix. Such aggregates are collected by a central computer and input as parameters to a Gaussian copula (GC) that generates the pseudodata. Survival inferences are computed on the pseudodata as if it were the IPD. Using practical examples we demonstrate the utility of the method, via the amount of IPD inferential content recoverable by the GC. We compare GC to a summary-based meta-analysis and an IPD bootstrap distributed across several centers. Other pseudodata approaches are also considered. In the empirical results, GC approximates the utility of the IPD bootstrap although it might yield more conservative inferences and it might have limitations in subgroup analyses. Overall, GC avoids many legal problems related to IPD privacy or property while enabling approximation of common IPD survival analyses otherwise difficult to conduct. Sharing more IPD aggregates than is currently practiced could facilitate \"second purpose\"-research and relax concerns regarding IPD access.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1031-1044"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Methods for Quality Tolerance Limit (QTL) Monitoring.
Pub Date: 2024-11-01 | Epub Date: 2024-08-09 | DOI: 10.1002/pst.2427
J C Poythress, Jin Hyung Lee, Kentaro Takeda, Jun Liu
In alignment with the ICH guideline for Good Clinical Practice [ICH E6(R2)], quality tolerance limit (QTL) monitoring has become a standard component of risk-based monitoring of clinical trials by sponsor companies. Parameters that are candidates for QTL monitoring are critical to participant safety and quality of trial results. Breaching the QTL of a given parameter could indicate systematic issues with the trial that could impact participant safety or compromise the reliability of trial results. Methods for QTL monitoring should detect potential QTL breaches as early as possible while limiting the rate of false alarms. Early detection allows for the implementation of remedial actions that can prevent a QTL breach at the end of the trial. We demonstrate that statistically based methods that account for the expected value and variability of the data generating process outperform simple methods based on fixed thresholds with respect to important operating characteristics. We also propose a Bayesian method for QTL monitoring and an extension that allows for the incorporation of partial information, demonstrating its potential to outperform frequentist methods originating from the statistical process control literature.
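A minimal sketch of Bayesian monitoring for a proportion-type QTL parameter (e.g., a premature-discontinuation rate) under a conjugate beta-binomial model is shown below. The QTL value, prior, and alert threshold are illustrative choices, not the paper's recommended settings, and the partial-information extension is not reproduced.

```python
from scipy import stats

qtl = 0.10            # quality tolerance limit on the true event rate (assumed)
a0, b0 = 1.0, 9.0     # beta prior centered near the expected rate of 0.10
threshold = 0.80      # alert if P(rate > QTL | data) exceeds this

def breach_probability(events, n):
    """Posterior probability that the true rate exceeds the QTL."""
    posterior = stats.beta(a0 + events, b0 + n - events)
    return posterior.sf(qtl)

# Hypothetical interim snapshots of (events, enrolled participants).
for events, n in [(5, 80), (12, 80), (18, 120)]:
    p = breach_probability(events, n)
    flag = "  <-- ALERT" if p > threshold else ""
    print(f"{events}/{n}: P(rate > QTL) = {p:.2f}{flag}")
```

Unlike a fixed-threshold rule on the observed proportion, the posterior probability accounts for how much data has accrued, limiting false alarms early in the trial.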
{"title":"Bayesian Methods for Quality Tolerance Limit (QTL) Monitoring.","authors":"J C Poythress, Jin Hyung Lee, Kentaro Takeda, Jun Liu","doi":"10.1002/pst.2427","DOIUrl":"10.1002/pst.2427","url":null,"abstract":"<p><p>In alignment with the ICH guideline for Good Clinical Practice [ICH E6(R2)], quality tolerance limit (QTL) monitoring has become a standard component of risk-based monitoring of clinical trials by sponsor companies. Parameters that are candidates for QTL monitoring are critical to participant safety and quality of trial results. Breaching the QTL of a given parameter could indicate systematic issues with the trial that could impact participant safety or compromise the reliability of trial results. Methods for QTL monitoring should detect potential QTL breaches as early as possible while limiting the rate of false alarms. Early detection allows for the implementation of remedial actions that can prevent a QTL breach at the end of the trial. We demonstrate that statistically based methods that account for the expected value and variability of the data generating process outperform simple methods based on fixed thresholds with respect to important operating characteristics. We also propose a Bayesian method for QTL monitoring and an extension that allows for the incorporation of partial information, demonstrating its potential to outperform frequentist methods originating from the statistical process control literature.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1166-1180"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141907380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reparametrized Firth's Logistic Regressions for Dose-Finding Study With the Biased-Coin Design.
Pub Date: 2024-11-01 | Epub Date: 2024-07-16 | DOI: 10.1002/pst.2423
Hyungwoo Kim, Seungpil Jung, Yudi Pawitan, Woojoo Lee
Finding an adequate dose of a drug by characterizing the dose-response relationship is a crucial and challenging problem in clinical development. The main concerns in dose-finding studies are identifying the minimum effective dose (MED) in anesthesia studies and the maximum tolerated dose (MTD) in oncology clinical trials. For the estimation of the MED and MTD, we propose two modifications of Firth's logistic regression using reparametrization, called reparametrized Firth's logistic regression (rFLR) and ridge-penalized reparametrized Firth's logistic regression (RrFLR). The proposed methods are designed by directly reducing the small-sample bias of the maximum likelihood estimate for the parameter of interest. In addition, we develop a method for constructing confidence intervals for rFLR and RrFLR using the profile penalized likelihood. In the up-and-down biased-coin design, numerical studies confirm the superior performance of the proposed methods in terms of mean squared error, bias, and coverage accuracy of confidence intervals.
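For context, here is a compact numpy implementation of standard Firth's logistic regression, the Jeffreys-prior penalized fit that the rFLR/RrFLR modifications build on; the reparametrized estimators themselves are not reproduced. The toy dose-response data are hypothetical and chosen to exhibit complete separation, where ordinary maximum likelihood diverges but the Firth estimate stays finite.

```python
import numpy as np

def firth_logistic(X, y, max_iter=50, tol=1e-8):
    """Firth-penalized logistic MLE via Newton iterations with the hat-value-adjusted score."""
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(max_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        XWX = X.T @ (X * W[:, None])                  # Fisher information
        XW_half = X * np.sqrt(W)[:, None]
        H = XW_half @ np.linalg.solve(XWX, XW_half.T)
        h = np.diag(H)                                # hat values
        score = X.T @ (y - p + h * (0.5 - p))         # Firth-modified score
        step = np.linalg.solve(XWX, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated toy data: responses flip between dose 3 and dose 4.
dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
resp = np.array([0, 0, 0, 1, 1, 1])
X = np.column_stack([np.ones_like(dose), dose])
print(firth_logistic(X, resp))   # finite intercept and slope despite separation
```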
{"title":"Reparametrized Firth's Logistic Regressions for Dose-Finding Study With the Biased-Coin Design.","authors":"Hyungwoo Kim, Seungpil Jung, Yudi Pawitan, Woojoo Lee","doi":"10.1002/pst.2423","DOIUrl":"10.1002/pst.2423","url":null,"abstract":"<p><p>Finding an adequate dose of the drug by revealing the dose-response relationship is very crucial and a challenging problem in the clinical development. The main concerns in dose-finding study are to identify a minimum effective dose (MED) in anesthesia studies and maximum tolerated dose (MTD) in oncology clinical trials. For the estimation of MED and MTD, we propose two modifications of Firth's logistic regression using reparametrization, called reparametrized Firth's logistic regression (rFLR) and ridge-penalized reparametrized Firth's logistic regression (RrFLR). The proposed methods are designed by directly reducing the small-sample bias of the maximum likelihood estimate for the parameter of interest. In addition, we develop a method on how to construct confidence intervals for rFLR and RrFLR using profile penalized likelihood. In the up-and-down biased-coin design, numerical studies confirm the superior performance of the proposed methods in terms of the mean squared error, bias, and coverage accuracy of confidence intervals.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1117-1127"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141627326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Cut-Point Selection Methods Under Binary Classification When Subclasses Are Involved.
Pub Date: 2024-11-01 | Epub Date: 2024-07-07 | DOI: 10.1002/pst.2413
Jia Wang, Lili Tian
In practice, we often encounter binary classification problems where both main classes consist of multiple subclasses. For example, in an ovarian cancer study where biomarkers were evaluated for their accuracy in distinguishing noncancer cases from cancer cases, the noncancer class consists of healthy subjects and benign cases, while the cancer class consists of subjects at both early and late stages. This article aims to provide a large number of optimal cut-point selection methods for such settings. Furthermore, we study confidence interval estimation for the optimal cut-points. Simulation studies are carried out to explore the performance of the proposed cut-point selection methods as well as the confidence interval estimation methods. A real ovarian cancer data set is analyzed using the proposed methods.
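A minimal sketch of one classical cut-point criterion (the Youden index, maximizing sensitivity + specificity - 1) with a percentile-bootstrap confidence interval follows; the paper's methods additionally account for the subclasses within each main class, which this toy example ignores, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
noncancer = rng.normal(1.0, 1.0, 150)   # healthy + benign combined (simulated biomarker)
cancer = rng.normal(2.2, 1.2, 100)      # early + late stage combined

def youden_cutpoint(neg, pos):
    """Cut-point maximizing sensitivity + specificity - 1 over observed values."""
    grid = np.unique(np.concatenate([neg, pos]))
    j = [(pos >= c).mean() + (neg < c).mean() - 1 for c in grid]
    return grid[int(np.argmax(j))]

est = youden_cutpoint(noncancer, cancer)
boot = [youden_cutpoint(rng.choice(noncancer, noncancer.size),
                        rng.choice(cancer, cancer.size)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"optimal cut-point = {est:.2f}, 95% percentile CI = ({lo:.2f}, {hi:.2f})")
```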
{"title":"Optimal Cut-Point Selection Methods Under Binary Classification When Subclasses Are Involved.","authors":"Jia Wang, Lili Tian","doi":"10.1002/pst.2413","DOIUrl":"10.1002/pst.2413","url":null,"abstract":"<p><p>In practice, we often encounter binary classification problems where both main classes consist of multiple subclasses. For example, in an ovarian cancer study where biomarkers were evaluated for their accuracy of distinguishing noncancer cases from cancer cases, the noncancer class consists of healthy subjects and benign cases, while the cancer class consists of subjects at both early and late stages. This article aims to provide a large number of optimal cut-point selection methods for such setting. Furthermore, we also study confidence interval estimation of the optimal cut-points. Simulation studies are carried out to explore the performance of the proposed cut-point selection methods as well as confidence interval estimation methods. A real ovarian cancer data set is analyzed using the proposed methods.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"984-1030"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}