Pub Date: 2025-01-01 | Epub Date: 2024-02-13 | DOI: 10.1002/pst.2370
Sarah Janssen
Immunoassays play an important role in the development of drug products targeting the immune system. Consistent quality of the results from an immunoassay is essential to make unbiased and accurate claims about the drug product during the preclinical and clinical development stages. Assay qualification and validation shed light on the performance of the assay: qualification is the first evaluation of the assay's performance, and validation is its verification. This tutorial explains and illustrates the calculation methodology for important assay qualification parameters, including precision, relative accuracy, linearity, the lower limit of quantification (LLOQ), the upper limit of quantification (ULOQ), the assay range, and dilutability. This tutorial focuses on assays used for (pre-)clinical purposes, characterized by a lognormal distribution of the measurements on their original, untransformed scale and by the lack of well-characterized reference material. Statistical calculations are illustrated with qualification data from an enzyme-linked immunosorbent assay (ELISA) vaccine immunoassay.
Janssen S. Introduction to qualification and validation of an immunoassay. Pharmaceutical Statistics. 2025;e2370. doi:10.1002/pst.2370
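Two of the qualification parameters listed above follow directly from the lognormal assumption: working on the log scale, precision can be reported as a percent geometric CV and relative accuracy as geometric-mean recovery. A minimal Python sketch (the tutorial itself works through ELISA data; the replicate values below are hypothetical):

```python
import math

def gcv_percent(measurements):
    """Percent geometric CV from the SD of log-transformed replicates.
    For lognormal data with log-scale SD s, CV = sqrt(exp(s^2) - 1)."""
    logs = [math.log(m) for m in measurements]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
    return 100.0 * math.sqrt(math.exp(var) - 1.0)

def relative_accuracy_percent(measurements, nominal):
    """Relative accuracy as geometric-mean recovery (%) vs. the nominal value."""
    logs = [math.log(m) for m in measurements]
    gm = math.exp(sum(logs) / len(logs))
    return 100.0 * gm / nominal

replicates = [95.0, 102.0, 98.0, 105.0, 100.0]  # hypothetical replicate readbacks
print(round(gcv_percent(replicates), 2))
print(round(relative_accuracy_percent(replicates, nominal=100.0), 2))
```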
Pub Date: 2025-01-01 | Epub Date: 2024-08-06 | DOI: 10.1002/pst.2422
Gina D'Angelo, Di Ran
Preclinical studies are broad and can encompass cellular research, animal trials, and small human trials. Preclinical studies tend to be exploratory and have smaller datasets that often consist of biomarker data. Logistic regression is typically the model of choice for modeling a binary outcome with explanatory variables such as genetic, imaging, and clinical data. Small preclinical studies can have challenging data, including complete or quasi-complete separation, which inflates the coefficient estimates and standard errors of standard logistic regression. Penalized regression approaches such as Firth's logistic regression are a solution that reduces the bias in the estimates. In this tutorial, a number of examples with separation (complete or quasi-complete) are illustrated, and the results from both logistic regression and Firth's logistic regression are compared to demonstrate the inflated estimates from the standard logistic regression model and the bias reduction achieved by the penalized Firth approach. R code and datasets are provided in the supplement.
D'Angelo G, Ran D. Tutorial on Firth's Logistic Regression Models for Biomarkers in Preclinical Space. Pharmaceutical Statistics. 2025;e2422. doi:10.1002/pst.2422
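The bias-reduction idea can be sketched in a few lines for the simplest possible case: one covariate, no intercept (the paper supplies full R code; this pure-Python version of Firth's modified score is illustrative only). With complete separation the ordinary MLE slope diverges, while the Firth-penalized slope stays finite:

```python
import math

# Complete separation: y = 0 for all x < 0, y = 1 for all x > 0,
# so the ordinary logistic MLE slope diverges to infinity.
x = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [0, 0, 0, 1, 1, 1]

def firth_slope(x, y, iters=200, tol=1e-10):
    """One-covariate, no-intercept Firth-penalized logistic slope.
    Modified score: U*(b) = sum_i x_i * (y_i - p_i + h_i * (0.5 - p_i)),
    with leverage h_i = x_i^2 * w_i / I(b); a simplified sketch of Firth's
    (1993) bias-reduced estimator, solved by Newton iterations."""
    b = 0.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-b * xi)) for xi in x]
        w = [pi * (1.0 - pi) for pi in p]
        info = sum(xi * xi * wi for xi, wi in zip(x, w))
        h = [xi * xi * wi / info for xi, wi in zip(x, w)]
        score = sum(xi * (yi - pi + hi * (0.5 - pi))
                    for xi, yi, pi, hi in zip(x, y, p, h))
        step = score / info
        b += step
        if abs(step) < tol:
            break
    return b

print(round(firth_slope(x, y), 3))  # finite, unlike the ordinary MLE
```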
Antonios Daletzakis, Kit C B Roes, Marianne A Jonker
The duration of response (DoR) is defined as the time from the onset of response to treatment up to progression of disease or death due to any reason, whichever occurs earlier. The expected DoR could be a suitable estimand to measure the efficacy of a treatment but is in practice difficult to estimate, since patients' follow-up times are often right-censored. Instead, the restricted mean duration of response (RMDoR) is often used. The RMDoR at a time τ is equal to the expected DoR restricted to the interval [0, τ]. In this paper, we consider the behaviour of the RMDoR as a function of τ and its suitability as a measure to quantify the efficacy of a treatment. In addition, we focus on the estimation of the RMDoR. In oncology, the events response to treatment and progression of disease are typically detected through time-scheduled scans and are therefore interval-censored.
Daletzakis A, Roes KCB, Jonker MA. Estimation of the Restricted Mean Duration of Response (RMDoR) in Oncology. Pharmaceutical Statistics. 2025;24(1):e2468. doi:10.1002/pst.2468
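Setting aside the interval-censoring refinements that are the paper's focus, the basic restricted-mean computation from a Kaplan-Meier curve (right censoring only, hypothetical data) can be sketched as follows:

```python
def km_restricted_mean(times, events, tau):
    """Area under the Kaplan-Meier survival curve up to tau.
    A right-censoring-only sketch of a restricted mean; the paper's
    estimators additionally handle interval censoring.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    data = sorted(zip(times, events))
    surv, prev_t, area = 1.0, 0.0, 0.0
    at_risk = len(data)
    for t, d in data:
        if t > tau:
            break
        area += surv * (t - prev_t)     # accumulate area of the current step
        if d:                           # event: Kaplan-Meier step down
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1                    # both events and censorings leave the risk set
        prev_t = t
    area += surv * (tau - prev_t)       # final step out to tau
    return area

# With no censoring and tau beyond the last event, this equals the sample mean:
print(km_restricted_mean([1, 2, 3, 4], [1, 1, 1, 1], 10.0))  # 2.5
```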
Jintong Hou, Leslie A McClure, Savina Jaeger, Lucy F Robinson
Clinical endpoints based on repeated measurements arise in many clinical research studies, and require specialized methods for sample size and power calculations. In clinical trials that measure counts over time, such as bleeding events in hemophilia, the dispersion of their distributions might change upon treatment and the measurements might be correlated. The generalized estimating equations (GEE) approach has been widely used for modeling correlated data and comparing rates. In this paper, we investigate the properties of GEE when applied to count outcomes with changes in dispersion. We derive general closed-form formulas to estimate sample size when the dispersion parameters and distributions of count data vary across two correlated measurements based on the GEE approach. These formulas allow for power and sample size estimation for intra-participant comparison of rates before and after an intervention, randomized controlled trials with equal allocation, or matched pairs designs. These formulas are derived for the following distributions: Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial distributions, and do not assume that measurements before and after an intervention come from the same distribution. Furthermore, we propose modified methods for estimating sample size and confidence intervals for the negative binomial distributions to overcome Type I error inflation, which is especially useful for large changes in the negative binomial dispersion parameter. We perform simulations, and evaluate the performance of the empirical power and Type I error over a range of parameters. Applications and R functions implementing the methods are also provided.
Hou J, McClure LA, Jaeger S, Robinson LF. Sample Size Estimation for Correlated Count Data With Changes in Dispersion. Pharmaceutical Statistics. 2025;24(1):e2469. doi:10.1002/pst.2469
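The paper's closed-form GEE formulas are not reproduced here, but the general shape of such a calculation can be sketched with a generic Wald-type sample size for an intra-participant comparison of two correlated rates on the log scale, assuming (for illustration only) a per-subject variance of a log count of roughly 1/μ + k, with k the negative-binomial dispersion:

```python
import math
from statistics import NormalDist

def paired_rate_n(mu1, mu2, k1, k2, rho, alpha=0.05, power=0.8):
    """Generic Wald-type sample size for comparing two correlated
    negative-binomial rates within subjects on the log scale.
    Per-subject variance of a log count is approximated by 1/mu + k.
    Illustrative only -- not the paper's exact GEE derivation."""
    z = NormalDist().inv_cdf
    v1, v2 = 1.0 / mu1 + k1, 1.0 / mu2 + k2
    # variance of the difference of correlated log rates
    var_diff = v1 + v2 - 2.0 * rho * math.sqrt(v1 * v2)
    effect = math.log(mu2 / mu1)  # log rate ratio
    return math.ceil((z(1 - alpha / 2) + z(power)) ** 2 * var_diff / effect ** 2)

# rate halves after intervention; moderate within-subject correlation
print(paired_rate_n(mu1=4.0, mu2=2.0, k1=0.5, k2=0.5, rho=0.3))
```

Note that larger dispersion inflates the required sample size while stronger within-subject correlation reduces it, which mirrors the qualitative behaviour the abstract describes.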
Pub Date: 2025-01-01 | Epub Date: 2024-08-18 | DOI: 10.1002/pst.2429
Palash Sharma, Milind A Phadnis
Stochastic curtailment tests for Phase II two-arm trials with time-to-event end points are traditionally performed using the log-rank test. Recent advances in designing time-to-event trials have utilized the Weibull distribution with a known shape parameter estimated from historical studies. As sample size calculations depend on the value of this shape parameter, these methods either cannot be used or likely underperform/overperform when the natural variation around the point estimate is ignored. We demonstrate that when the magnitude of the Weibull shape parameters changes, unblinded interim information on the shape of the survival curves can be useful to enrich the final analysis for reestimation of the sample size. For such scenarios, we propose two Bayesian solutions to estimate the natural variations of the Weibull shape parameter. We implement these approaches under the framework of the newly proposed relative time method that allows nonproportional hazards and nonproportional time. We also demonstrate the sample size reestimation for the relative time method using three different approaches (internal pilot study approach, conditional power, and predictive power approach) at the interim stage of the trial. We demonstrate our methods using a hypothetical example and provide insights regarding the practical constraints for the proposed methods.
Sharma P, Phadnis MA. Sample Size Reestimation in Stochastic Curtailment Tests With Time-to-Events Outcome in the Case of Nonproportional Hazards Utilizing Two Weibull Distributions With Unknown Shape Parameters. Pharmaceutical Statistics. 2025;e2429. doi:10.1002/pst.2429
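As a toy illustration of extracting shape information at an interim look (not the paper's Bayesian or relative-time machinery), the Weibull shape can be re-estimated from unblinded interim event times by solving the standard profile score equation by bisection:

```python
import math
import random

def weibull_shape_mle(t, lo=0.05, hi=20.0, iters=200):
    """Profile-likelihood MLE of the Weibull shape k, found by bisection on
    the standard score equation:
        sum(t^k * ln t) / sum(t^k) - 1/k - mean(ln t) = 0,
    which is monotone increasing in k and has a unique root."""
    logs = [math.log(ti) for ti in t]
    mlog = sum(logs) / len(logs)

    def g(k):
        tk = [ti ** k for ti in t]
        return sum(a * b for a, b in zip(tk, logs)) / sum(tk) - 1.0 / k - mlog

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(1)
# hypothetical "interim" sample from Weibull(shape=1.5, scale=10), via inverse transform
sample = [10.0 * (-math.log(1.0 - random.random())) ** (1.0 / 1.5) for _ in range(500)]
print(round(weibull_shape_mle(sample), 2))  # near 1.5
```

The sampling variability of this point estimate around the true shape is exactly the "natural variation" that, per the abstract, the proposed Bayesian solutions account for.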
In compound hit screening, an important chemical property is target binding affinity, represented by a parameter ΔΔG. ΔΔG can be measured experimentally (ΔΔGexp) or calculated via simulations (ΔΔGcalc). Because measuring ΔΔG experimentally is expensive, only a few experimental runs are performed. The relationship between the experimental data and the calculated results is a straight line with a slope that is not necessarily one. The goal is to estimate the linear relationship between ΔΔGexp and ΔΔGcalc by fitting a Deming regression model that will be used to predict future values of ΔΔGtrue based on the obtained ΔΔGcalc.
Tatikola K, Cabrera J. Estimating the Strength of Binding Affinity via Delta-Delta-G for Hit Screening After a Deming Regression Calibration. Pharmaceutical Statistics. 2025;24(1):e2460. doi:10.1002/pst.2460
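Unlike ordinary least squares, Deming regression allows measurement error in both variables and has a closed-form slope. A sketch with hypothetical calibration values (delta is the assumed ratio of the two measurement-error variances; delta = 1 gives orthogonal regression):

```python
import math

def deming_slope_intercept(x, y, delta=1.0):
    """Closed-form Deming regression: errors in both x and y, with
    error-variance ratio delta = var_err(y) / var_err(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    return slope, my - slope * mx

# hypothetical calibration pairs: ddG_calc (simulation) vs. ddG_exp (assay)
calc = [0.5, 1.0, 1.5, 2.0, 3.0]
expm = [1.1, 2.0, 3.1, 3.9, 6.1]
slope, intercept = deming_slope_intercept(calc, expm)
print(round(slope, 3), round(intercept, 3))
```

The fitted line can then be inverted to predict the affinity scale of future simulated values, mirroring the calibration use described in the abstract.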
In oncology, Phase II studies are crucial for clinical development plans, as such studies identify potent agents with sufficient activity to continue development in the subsequent Phase III trials. Traditionally, Phase II studies are single-arm studies, with the primary endpoint being short-term treatment efficacy. However, drug safety is also an important consideration. In the context of such multiple-outcome designs, predictive probability-based Bayesian monitoring strategies have been developed to assess whether a clinical trial will provide enough evidence to continue with a Phase III study at the scheduled end of the trial. We therefore propose a simple new index vector to summarize results that existing strategies cannot capture. Specifically, we define the worst and most promising situations for the potential effect of a treatment, then use the proposed index vector to measure the deviation between the two situations. Finally, simulation studies are performed to evaluate the operating characteristics of the design.
Yoshimoto T, Shinoda S, Yamamoto K, Tahata K. Bayesian Predictive Probability Based on a Bivariate Index Vector for Single-Arm Phase II Study With Binary Efficacy and Safety Endpoints. Pharmaceutical Statistics. 2025;e2431. doi:10.1002/pst.2431
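For a single binary efficacy endpoint with a uniform prior, the predictive-probability computation that such monitoring designs build on can be written exactly, since integer Beta parameters admit a binomial identity for the posterior tail. This is a simplified efficacy-only sketch; the paper's bivariate index vector extends the idea to joint efficacy/safety monitoring:

```python
import math

def beta_tail(s, f, p0):
    """P(p > p0) for p ~ Beta(s + 1, f + 1) (uniform prior), using the exact
    binomial identity for integer Beta parameters."""
    n = s + f + 1
    return sum(math.comb(n, j) * p0 ** j * (1 - p0) ** (n - j)
               for j in range(s + 1))

def log_betabinom(x, m, a, b):
    """log P(X = x) for X ~ BetaBinomial(m, a, b): future responders given
    the current posterior."""
    lbeta = lambda u, v: math.lgamma(u) + math.lgamma(v) - math.lgamma(u + v)
    return math.log(math.comb(m, x)) + lbeta(x + a, m - x + b) - lbeta(a, b)

def predictive_probability(s, n_obs, n_max, p0, theta):
    """Predictive probability that the final analysis declares efficacy,
    i.e. P(p > p0 | all data) > theta, averaged over future outcomes."""
    f, m = n_obs - s, n_max - n_obs
    pp = 0.0
    for x in range(m + 1):                       # enumerate future responder counts
        if beta_tail(s + x, f + m - x, p0) > theta:
            pp += math.exp(log_betabinom(x, m, s + 1, f + 1))
    return pp

# interim look: n_obs of n_max patients observed, s responders (hypothetical numbers)
print(round(predictive_probability(s=8, n_obs=20, n_max=40, p0=0.4, theta=0.9), 3))
print(round(predictive_probability(s=14, n_obs=20, n_max=40, p0=0.4, theta=0.9), 3))
```

A go/no-go rule then compares this predictive probability against prespecified futility and efficacy cutoffs at the interim analysis.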
Pub Date: 2025-01-01 | Epub Date: 2024-07-10 | DOI: 10.1002/pst.2420
Timothy Schofield
Chemistry, manufacturing, and control (CMC) statisticians play a key role in the development and lifecycle management of pharmaceutical and biological products, working with their non-statistician partners to manage product quality. Information used to make quality decisions comes from studies, where success is facilitated through adherence to the scientific method. This is carried out in four steps: (1) an objective, (2) design, (3) conduct, and (4) analysis. Careful consideration of each step helps to ensure that a study conclusion and the associated decision are correct. This can be a development decision related to the validity of an assay or a quality decision such as conformance to specifications. Importantly, all decisions are made with risk. Conventional statistical risks such as Type 1 and Type 2 errors can be coupled with associated impacts to manage patient value as well as development and commercial costs. The CMC statistician brings focus on managing risk across the steps of the scientific method, leading to optimal product development and robust supply of life-saving drugs and biologicals.
Schofield T. The Role of CMC Statisticians: Co-Practitioners of the Scientific Method. Pharmaceutical Statistics. 2025;e2420. doi:10.1002/pst.2420
Pub Date: 2025-01-01 | Epub Date: 2024-04-02 | DOI: 10.1002/pst.2383
Elli Makariadou, Xuechen Wang, Nicholas Hein, Negera W Deresa, Kathy Mutambanengwe, Bie Verbist, Olivier Thas
Combination treatments have been of increasing importance in drug development across therapeutic areas to improve treatment response, minimize the development of resistance, and/or minimize adverse events. Pre-clinical in vitro combination experiments aim to explore the potential of such drug combinations during drug discovery by comparing the observed effect of the combination with the expected treatment effect under the assumption of no interaction (i.e., the null model). This tutorial will address important design aspects of such experiments to allow proper statistical evaluation. Additionally, it will highlight the Biochemically Intuitive Generalized Loewe methodology (BIGL R package, available on CRAN) to statistically detect deviations from the expectation under different null models. A clear advantage of the methodology is the quantification of the effect sizes, together with confidence intervals, while controlling the directional false coverage rate. Finally, a case study will showcase the workflow in analyzing combination experiments.
Makariadou E, Wang X, Hein N, Deresa NW, Mutambanengwe K, Verbist B, Thas O. Synergy detection: A practical guide to statistical assessment of potential drug combinations. Pharmaceutical Statistics. 2025;e2383. doi:10.1002/pst.2383
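A generic Loewe-additivity null model (a sketch of the classical idea only, not the generalized Loewe methodology of the BIGL package) can be computed by inverting Hill dose-response curves with hypothetical parameters:

```python
def hill(d, ec50, h):
    """Increasing Hill dose-response curve on [0, 1)."""
    return d ** h / (ec50 ** h + d ** h)

def inverse_hill(e, ec50, h):
    """Dose of a single drug that produces effect e."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / h)

def loewe_expected(d1, d2, ec50_1, h1, ec50_2, h2, iters=100):
    """Expected combination effect under classical Loewe additivity:
    solve  d1 / D1(E) + d2 / D2(E) = 1  for E by bisection, where Dj(E)
    is the dose of drug j alone that yields effect E."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(iters):
        e = 0.5 * (lo + hi)
        total = (d1 / inverse_hill(e, ec50_1, h1)
                 + d2 / inverse_hill(e, ec50_2, h2))
        if total > 1.0:      # doses over-achieve effect e -> true E is larger
            lo = e
        else:
            hi = e
    return 0.5 * (lo + hi)

# sanity check: two identical drugs at half dose each behave like one full dose
print(round(loewe_expected(0.5, 0.5, 1.0, 1.0, 1.0, 1.0), 4))  # = hill(1.0) = 0.5
```

Synergy assessment then compares the observed combination effect against this null expectation; BIGL adds the statistical testing and false-coverage-rate control on top.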
Pub Date: 2025-01-01 | Epub Date: 2024-10-16 | DOI: 10.1002/pst.2438
Yilong Zhang, Yujie Zhao, Bingjun Wang, Yiwen Luo
In covariate-adaptive or response-adaptive randomization, the treatment assignment and outcome can be correlated. In this situation, the re-randomization test is a straightforward and attractive method for providing valid statistical inference. In this paper, we investigate the number of repetitions required in re-randomization tests. This is motivated by group sequential designs in clinical trials, where the nominal significance bound can be very small at an interim analysis. Accordingly, re-randomization tests lead to a very large number of required repetitions, which may be computationally intractable. To reduce the number of repetitions, we propose an adaptive procedure and compare it with multiple approaches under predefined criteria. Monte Carlo simulations are conducted to show the performance of the different approaches at limited sample sizes. We also suggest strategies to reduce total computation time and provide practical guidance on preparing, executing, and reporting before and after data are unblinded at an interim analysis, so the computation can be completed within a reasonable time frame.
Zhang Y, Zhao Y, Wang B, Luo Y. Number of Repetitions in Re-Randomization Tests. Pharmaceutical Statistics. 2025;e2438. doi:10.1002/pst.2438
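A minimal Monte Carlo re-randomization test under complete randomization (hypothetical data; simpler than the covariate- and response-adaptive schemes the paper considers) illustrates why small interim bounds are costly: to resolve a nominal level α, the number of repetitions must be at least of order 1/α, since the smallest attainable p-value is 1/(n_rep + 1):

```python
import random

def rerandomization_pvalue(values, assignment, n_rep=2000, seed=7):
    """Monte Carlo re-randomization test for a difference in group means
    under complete randomization.  Returns the conservative estimate
    p = (1 + #{|T*| >= |T_obs|}) / (n_rep + 1)."""
    rng = random.Random(seed)

    def stat(assign):
        trt = [v for v, a in zip(values, assign) if a == 1]
        ctl = [v for v, a in zip(values, assign) if a == 0]
        return sum(trt) / len(trt) - sum(ctl) / len(ctl)

    obs = abs(stat(assignment))
    count = 0
    for _ in range(n_rep):
        perm = assignment[:]
        rng.shuffle(perm)           # re-randomize the treatment labels
        count += abs(stat(perm)) >= obs
    return (1 + count) / (n_rep + 1)

# hypothetical trial with a large treatment effect
values = [5.1, 4.9, 5.3, 5.2, 5.0, 1.1, 0.9, 1.2, 1.0, 0.8]
assignment = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(rerandomization_pvalue(values, assignment))
```

With n_rep = 2000 the smallest reportable p-value is about 5e-4, so comparing against an O'Brien-Fleming-type interim bound of, say, 1e-5 would require orders of magnitude more repetitions, which is the computational problem the paper addresses.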