Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.
Fasheng Li, Beverly Nickerson, Les Van Alstine, Ke Wang
Pub Date: 2025-01-01 | Epub Date: 2024-03-17 | DOI: 10.1002/pst.2379
In vitro dissolution testing is a regulatory-required critical quality measure for solid dose pharmaceutical drug products. Setting acceptance criteria that satisfy compendial requirements is necessary for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications, and visualizing results can vary according to product requirements, company practices, and scientific judgment. This paper provides a general description of the steps taken in the evaluation and setting of in vitro dissolution specifications at release and on stability.
{"title":"Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.","authors":"Fasheng Li, Beverly Nickerson, Les Van Alstine, Ke Wang","doi":"10.1002/pst.2379","DOIUrl":"10.1002/pst.2379","url":null,"abstract":"<p><p>In vitro dissolution testing is a regulatory required critical quality measure for solid dose pharmaceutical drug products. Setting the acceptance criteria to meet compendial criteria is required for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications and visualizing results could vary according to product requirements, company's practices, and scientific judgements. This paper provides a general description of the steps taken in the evaluation and setting of in vitro dissolution specifications at release and on stability.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2379"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140143994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the Operational Characteristics of the Individual Causal Association as a Metric of Surrogacy in the Binary Continuous Setting.
Fenny Ong, Geert Molenberghs, Andrea Callegaro, Wim Van der Elst, Florian Stijven, Geert Verbeke, Ingrid Van Keilegom, Ariel Alonso
Pub Date: 2025-01-01 | Epub Date: 2024-09-29 | DOI: 10.1002/pst.2437
In a causal inference framework, a new metric has been proposed to quantify surrogacy for a continuous putative surrogate and a binary true endpoint, based on information theory. The proposed metric, termed the individual causal association (ICA), is quantified using a joint causal inference model for the corresponding potential outcomes. Due to the non-identifiability inherent in this type of model, a sensitivity analysis was introduced to study the behavior of the ICA as a function of the non-identifiable parameters characterizing the model. In this setting, several plausible yet untestable assumptions, such as monotonicity, independence, conditional independence, or a homogeneous variance-covariance structure, are often incorporated into the analysis to reduce uncertainty. We assess the robustness of the methodology to these simplifying assumptions via simulation. The practical implications of the findings are demonstrated in the analysis of a randomized clinical trial evaluating an inactivated quadrivalent influenza vaccine.
{"title":"Assessing the Operational Characteristics of the Individual Causal Association as a Metric of Surrogacy in the Binary Continuous Setting.","authors":"Fenny Ong, Geert Molenberghs, Andrea Callegaro, Wim Van der Elst, Florian Stijven, Geert Verbeke, Ingrid Van Keilegom, Ariel Alonso","doi":"10.1002/pst.2437","DOIUrl":"10.1002/pst.2437","url":null,"abstract":"<p><p>In a causal inference framework, a new metric has been proposed to quantify surrogacy for a continuous putative surrogate and a binary true endpoint, based on information theory. The proposed metric, termed the individual causal association (ICA), was quantified using a joint causal inference model for the corresponding potential outcomes. Due to the non-identifiability inherent in this type of models, a sensitivity analysis was introduced to study the behavior of the ICA as a function of the non-identifiable parameters characterizing the aforementioned model. In this scenario, to reduce uncertainty, several plausible yet untestable assumptions like monotonicity, independence, conditional independence or homogeneous variance-covariance, are often incorporated into the analysis. We assess the robustness of the methodology regarding these simplifying assumptions via simulation. The practical implications of the findings are demonstrated in the analysis of a randomized clinical trial evaluating an inactivated quadrivalent influenza vaccine.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2437"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142351785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taylor Series Approximation for Accurate Generalized Confidence Intervals of Ratios of Log-Normal Standard Deviations for Meta-Analysis Using Means and Standard Deviations in Time Scale.
Pei-Fu Chen, Franklin Dexter
Pub Date: 2025-01-01 | DOI: 10.1002/pst.2467
With contemporary anesthetic drugs, the efficacy of general anesthesia is assured. Health-economic and clinical objectives instead concern reductions in variability, in dosing, in recovery, and so on. Consequently, meta-analyses in anesthesiology research would benefit from quantifying ratios of standard deviations of log-normally distributed variables (e.g., surgical duration). Generalized confidence intervals can be used once the sample means and standard deviations in the raw (time) scale for each study and group have been used to estimate the means and standard deviations of the logarithms of the times (i.e., the "log scale"). We examine matching the first two moments versus also using higher-order terms, following Higgins et al. (2008) and Friedrich et al. (2012). Monte Carlo simulations revealed that, using the first two moments, 95% confidence intervals had coverage of 92%-95% with small bias. Use of higher-order moments worsened confidence interval coverage for the log ratios, especially for coefficients of variation in the time scale of 50% and for larger sample sizes per group (n = 50), resulting in 88% coverage. We recommend that, when calculating confidence intervals for ratios of standard deviations based on generalized pivotal quantities and log-normal distributions while relying on transformation of sample statistics from the time scale to the log scale, the first two moments be used, not the higher-order terms.
{"title":"Taylor Series Approximation for Accurate Generalized Confidence Intervals of Ratios of Log-Normal Standard Deviations for Meta-Analysis Using Means and Standard Deviations in Time Scale.","authors":"Pei-Fu Chen, Franklin Dexter","doi":"10.1002/pst.2467","DOIUrl":"10.1002/pst.2467","url":null,"abstract":"<p><p>With contemporary anesthetic drugs, the efficacy of general anesthesia is assured. Health-economic and clinical objectives are related to reductions in the variability in dosing, variability in recovery, etc. Consequently, meta-analyses for anesthesiology research would benefit from quantification of ratios of standard deviations of log-normally distributed variables (e.g., surgical duration). Generalized confidence intervals can be used, once sample means and standard deviations in the raw, time, scale, for each study and group have been used to estimate the mean and standard deviation of the logarithms of the times (i.e., \"log-scale\"). We examine the matching of the first two moments versus also using higher-order terms, following Higgins et al. 2008 and Friedrich et al. 2012. Monte Carlo simulations revealed that using the first two moments 95% confidence intervals had coverage 92%-95%, with small bias. Use of higher-order moments worsened confidence interval coverage for the log ratios, especially for coefficients of variation in the time scale of 50% and for larger <math> <semantics> <mrow> <mfenced><mrow><mi>n</mi> <mo>=</mo> <mn>50</mn></mrow> </mfenced> </mrow> <annotation>$$ left(n=50right) $$</annotation></semantics> </math> sample sizes per group, resulting in 88% coverage. We recommend that for calculating confidence intervals for ratios of standard deviations based on generalized pivotal quantities and log-normal distributions, when relying on transformation of sample statistics from time to log scale, use the first two moments, not the higher order terms.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 1","pages":"e2467"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11755222/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143024337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simultaneous Inference Using Multiple Marginal Models.
Ludwig A Hothorn, Christian Ritz, Frank Schaarschmidt, Signe M Jensen, Robin Ristl
Pub Date: 2025-01-01 | Epub Date: 2024-08-21 | DOI: 10.1002/pst.2428
This tutorial describes single-step, low-dimensional simultaneous inference, with a focus on the availability of adjusted p values and compatible confidence intervals for more than just the usual mean-value comparisons. The basic idea is, first, to exploit the influence of correlation on the quantile of the multivariate t-distribution (the higher the correlation, the less conservative the inference) and, second, to estimate the correlation matrix via the multiple marginal models (mmm) approach, which accommodates multiple models in the class from linear models up to generalized linear mixed models. The underlying maxT-test using mmm is discussed by means of several real-data scenarios using selected R packages. A remarkably wide range of features is highlighted, among them: (i) analyzing different-scaled, correlated, multiple endpoints, (ii) analyzing multiple correlated binary endpoints, (iii) modeling dose as a qualitative factor and/or a quantitative covariate, (iv) jointly considering several tuning parameters within the poly-k trend test, (v) jointly testing dose and time, (vi) considering several effect sizes, (vii) jointly testing subgroups and the overall population in multiarm randomized clinical trials with correlated primary endpoints, (viii) multiple linear mixed-effects models, (ix) generalized estimating equations, and (x) nonlinear regression models.
{"title":"Simultaneous Inference Using Multiple Marginal Models.","authors":"Ludwig A Hothorn, Christian Ritz, Frank Schaarschmidt, Signe M Jensen, Robin Ristl","doi":"10.1002/pst.2428","DOIUrl":"10.1002/pst.2428","url":null,"abstract":"<p><p>This tutorial describes single-step low-dimensional simultaneous inference with a focus on the availability of adjusted p values and compatible confidence intervals for more than just the usual mean value comparisons. The basic idea is, first, to use the influence of correlation on the quantile of the multivariate t-distribution: the higher the less conservative. In addition, second, the estimability of the correlation matrix using the multiple marginal models approach (mmm) using multiple models in the class of linear up to generalized linear mixed models. The underlying maxT-test using mmm is discussed by means of several real data scenarios using selected R packages. Surprisingly, different features are highlighted, among them: (i) analyzing different-scaled, correlated, multiple endpoints, (ii) analyzing multiple correlated binary endpoints, (iii) modeling dose as qualitative factor and/or quantitative covariate, (iv) joint consideration of several tuning parameters within the poly-k trend test, (v) joint testing of dose and time, (vi) considering several effect sizes, (vii) joint testing of subgroups and overall population in multiarm randomized clinical trials with correlated primary endpoints, (viii) multiple linear mixed effect models, (ix) generalized estimating equations, and (x) nonlinear regression models.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2428"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788266/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142009206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative Analyses of Bioequivalence Assessment Methods for In Vitro Permeation Test Data.
Sami Leon, Elena Rantou, Jessica Kim, Sungwoo Choi, Nam Hee Choi
Pub Date: 2025-01-01 | Epub Date: 2024-08-24 | DOI: 10.1002/pst.2434
For topical dermatological drug products, an in vitro option to determine bioequivalence (BE) between test and reference products is recommended. In particular, in vitro permeation test (IVPT) data analysis uses a reference-scaled approach for the two primary endpoints, cumulative penetration amount (AMT) and maximum flux (Jmax), which takes within-donor variability into consideration. In 2022, the Food and Drug Administration (FDA) published a draft IVPT guidance that includes statistical analysis methods for both balanced and unbalanced IVPT study data. This work presents a comprehensive evaluation of methodologies used to estimate the critical parameters essential in assessing BE. Specifically, we investigate the performance of the FDA draft guidance approach alongside alternative empirical and model-based methods that utilize mixed-effects models. Our analyses include both simulated scenarios and real-world studies. In the simulated scenarios, the empirical formulas consistently approximate the true model robustly, particularly in effectively addressing treatment-by-donor interactions. Conversely, the effectiveness of the model-based approaches relies heavily on precise model selection, which significantly influences their results. The research therefore emphasizes the importance of accurate model selection in model-based BE assessment methodologies and highlights the reliability of the empirical formulas relative to model-based approaches, offering valuable insights for BE assessment as employed in IVPT data analysis.
{"title":"Comparative Analyses of Bioequivalence Assessment Methods for In Vitro Permeation Test Data.","authors":"Sami Leon, Elena Rantou, Jessica Kim, Sungwoo Choi, Nam Hee Choi","doi":"10.1002/pst.2434","DOIUrl":"10.1002/pst.2434","url":null,"abstract":"<p><p>For topical, dermatological drug products, an in vitro option to determine bioequivalence (BE) between test and reference products is recommended. In particular, in vitro permeation test (IVPT) data analysis uses a reference-scaled approach for two primary endpoints, cumulative penetration amount (AMT) and maximum flux (J <sub>max</sub>), which takes the within donor variability into consideration. In 2022, the Food and Drug Administration (FDA) published a draft IVPT guidance that includes statistical analysis methods for both balanced and unbalanced cases of IVPT study data. This work presents a comprehensive evaluation of various methodologies used to estimate critical parameters essential in assessing BE. Specifically, we investigate the performance of the FDA draft IVPT guidance approach alongside alternative empirical and model-based methods utilizing mixed-effects models. Our analyses include both simulated scenarios and real-world studies. In simulated scenarios, empirical formulas consistently demonstrate robustness in approximating the true model, particularly in effectively addressing treatment-donor interactions. Conversely, the effectiveness of model-based approaches heavily relies on precise model selection, which significantly influences their results. The research emphasizes the importance of accurate model selection in model-based BE assessment methodologies. It sheds light on the advantages of empirical formulas, highlighting their reliability compared to model-based approaches and offers valuable implications for BE assessments. Our findings underscore the significance of robust methodologies and provide essential insights to advance their understanding and application in the assessment of BE, employed in IVPT data analysis.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2434"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142047000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixture Experimentation in Pharmaceutical Formulations: A Tutorial.
Lynne B Hare, Stan Altan, Hans Coppenolle
Pub Date: 2025-01-01 | Epub Date: 2024-08-05 | DOI: 10.1002/pst.2426
Mixture experimentation is common in pharmaceutical formulation studies, where the relative proportions of the individual components are modeled for their effects on product attributes. The requirement that the component proportions sum to 1 has given rise to a dedicated class of designs known as mixture designs. The first mixture designs were published by Quenouille in 1953, but it took nearly 40 years for the earliest applications to appear in the pharmaceutical sciences literature, with Kettaneh-Wold in 1991 and Waaler in 1992. Since then, the advent of efficient computer algorithms for design generation has made this class of designs easily accessible to pharmaceutical statisticians, although they appear to remain an underutilized experimental strategy even today. One goal of this tutorial is to draw the attention of experimental statisticians to these designs and their advantages in formulation studies such as excipient compatibility studies. We present sufficient material to introduce the novice practitioner to this class of designs, the associated models, and analysis strategies. An example of a mixture-process variable design is given as a case study.
{"title":"Mixture Experimentation in Pharmaceutical Formulations: A Tutorial.","authors":"Lynne B Hare, Stan Altan, Hans Coppenolle","doi":"10.1002/pst.2426","DOIUrl":"10.1002/pst.2426","url":null,"abstract":"<p><p>Mixture experimentation is commonly seen in pharmaceutical formulation studies, where the relative proportions of the individual components are modeled for effects on product attributes. The requirement that the sum of the component proportions equals 1 has given rise to the class of designs, known as mixture designs. The first mixture designs were published by Quenouille in 1953 but it took nearly 40 years for the earliest mixture design applications to be published in the pharmaceutical sciences literature by Kettaneh-Wold in 1991 and Waaler in 1992. Since then, the advent of efficient computer algorithms to generate designs has made this class of designs easily accessible to pharmaceutical statisticians, although the use of these designs appears to be an underutilized experimental strategy even today. One goal of this tutorial is to draw the attention of experimental statisticians to this class of designs and their advantages in pursuing formulation studies such as excipient compatibility studies. We present sufficient materials to introduce the novice practitioner to this class of design, associated models, and analysis strategies. An example of a mixture-process variable design is given as a case study.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2426"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balance Index to Determine the Follow-Up Duration of Oncology Trials.
Lei Yang, Feinan Lu
Pub Date: 2025-01-01 | Epub Date: 2024-10-11 | DOI: 10.1002/pst.2442
Several indices have been suggested for determining the follow-up duration of oncology trials from either a maturity or a stability perspective, by maximizing the time t such that the index is either greater or less than a pre-defined cutoff value. However, the selection of the cutoff value is subjective, and usually no commonly agreed cutoff exists; sometimes one has to resort to simulations. To solve this problem, a new balance index is proposed that integrates both data stability and data maturity. Its theoretical properties and its relationships with other indices are investigated, and its performance is then demonstrated through a case study. The highlights of the index are that it is (1) easy to calculate, (2) free of cutoff value selection, and (3) generally consistent with the other indices while sometimes able to shorten the follow-up duration, and thus more flexible. For cases where the new balance index cannot be calculated, a modified balance index is also proposed and discussed. For either single-arm or randomized clinical trials, the two new balance indices can be applied in a wide range of situations, such as designing a new trial from scratch or using aggregated trial information to inform decision-making in the middle of trial conduct.
{"title":"Balance Index to Determine the Follow-Up Duration of Oncology Trials.","authors":"Lei Yang, Feinan Lu","doi":"10.1002/pst.2442","DOIUrl":"10.1002/pst.2442","url":null,"abstract":"<p><p>Several indices were suggested to determine the follow up duration in oncology trials from either maturity or stability perspective, by maximizing time <math> <semantics><mrow><mi>t</mi></mrow> </semantics> </math> such that the index was either greater or less than a pre-defined cutoff value. However, the selection of cutoff value was subjective and usually no commonly agreed cutoff value existed; sometimes one had to resort to simulations. To solve this problem, a new balance index was proposed, which integrated both data stability and data maturity. Its theoretical properties and relationships with other indices were investigated; then its performance was demonstrated through a case study. The highlights of the index are: (1) easy to calculate; (2) free of cutoff value selection; (3) generally consistent with the other indices while sometimes able to shorten the follow-up duration thus more flexible. For the cases where the new balance index cannot be calculated, a modified balance index was also proposed and discussed. For either single arm trial or randomized clinical trial, the two new balance indices can be implemented to widespread situations such as designing a new trial from scratch, or using aggregated trial information to inform the decision-making in the middle of trial conduct.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2442"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142400956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Potency Assay Variability Estimation in Practice.
Hang Li, Tomasz M Witkos, Scott Umlauf, Christopher Thompson
Pub Date: 2025-01-01 | Epub Date: 2024-07-08 | DOI: 10.1002/pst.2408
During the drug development process, potency testing plays an important role in the quality assessment required for the manufacturing and marketing of biologics. Due to multiple operational and biological factors, higher variability is usually observed in bioassays than in physicochemical methods. In this paper, we discuss different sources of bioassay variability and how this variability can be estimated statistically. In addition, we propose an algorithm to estimate the variability of reportable results associated with different numbers of runs and the corresponding out-of-specification (OOS) rates under a given specification. Numerical experiments are conducted on multiple assay formats to elucidate the empirical distribution of bioassay variability.
{"title":"Potency Assay Variability Estimation in Practice.","authors":"Hang Li, Tomasz M Witkos, Scott Umlauf, Christopher Thompson","doi":"10.1002/pst.2408","DOIUrl":"10.1002/pst.2408","url":null,"abstract":"<p><p>During the drug development process, testing potency plays an important role in the quality assessment required for the manufacturing and marketing of biologics. Due to multiple operational and biological factors, higher variability is usually observed in bioassays compared with physicochemical methods. In this paper, we discuss different sources of bioassay variability and how this variability can be statistically estimated. In addition, we propose an algorithm to estimate the variability of reportable results associated with different numbers of runs and their corresponding OOS rates under a given specification. Numerical experiments are conducted on multiple assay formats to elucidate the empirical distribution of bioassay variability.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2408"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788244/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141559471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Propensity Score Analysis With Baseline and Follow-Up Measurements of the Outcome Variable.
Peter C Austin
Pub Date: 2025-01-01 | Epub Date: 2024-09-05 | DOI: 10.1002/pst.2436
A common feature of cohort studies is a baseline measurement of the continuous follow-up or outcome variable. Common examples include baseline measurements of physiological characteristics, such as blood pressure or heart rate, in studies where the outcome is a post-baseline measurement of the same variable. Methods incorporating the propensity score are increasingly being used to estimate the effects of treatments in observational studies. We examined six methods for incorporating the baseline value of the follow-up variable when using propensity score matching or weighting. These methods differed according to whether the baseline value of the follow-up variable was included in or excluded from the propensity score model, whether subsequent regression adjustment for the baseline value was conducted in the matched or weighted sample, and whether the analysis estimated the effect of treatment on the follow-up variable itself or on the change from baseline. We used Monte Carlo simulations with 750 scenarios. While no analytic method had uniformly superior performance, we offer the following recommendations. First, when using weighting and the average treatment effect (ATE) is the target estimand, use an augmented inverse probability weighted estimator, or include the baseline value of the follow-up variable in the propensity score model and subsequently adjust for it in a regression model. Second, when the average treatment effect in the treated (ATT) is the target estimand, regardless of whether weighting or matching is used, analyze change from baseline using a propensity score that excludes the baseline value of the follow-up variable.
{"title":"Propensity Score Analysis With Baseline and Follow-Up Measurements of the Outcome Variable.","authors":"Peter C Austin","doi":"10.1002/pst.2436","DOIUrl":"10.1002/pst.2436","url":null,"abstract":"<p><p>A common feature in cohort studies is when there is a baseline measurement of the continuous follow-up or outcome variable. Common examples include baseline measurements of physiological characteristics such as blood pressure or heart rate in studies where the outcome is post-baseline measurement of the same variable. Methods incorporating the propensity score are increasingly being used to estimate the effects of treatments using observational studies. We examined six methods for incorporating the baseline value of the follow-up variable when using propensity score matching or weighting. These methods differed according to whether the baseline value of the follow-up variable was included or excluded from the propensity score model, whether subsequent regression adjustment was conducted in the matched or weighted sample to adjust for the baseline value of the follow-up variable, and whether the analysis estimated the effect of treatment on the follow-up variable or on the change from baseline. We used Monte Carlo simulations with 750 scenarios. While no analytic method had uniformly superior performance, we provide the following recommendations: first, when using weighting and the ATE is the target estimand, use an augmented inverse probability weighted estimator or include the baseline value of the follow-up variable in the propensity score model and subsequently adjust for the baseline value of the follow-up variable in a regression model. Second, when the ATT is the target estimand, regardless of whether using weighting or matching, analyze change from baseline using a propensity score that excludes the baseline value of the follow-up variable.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2436"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788469/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142140774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New Methods for Two-Stage Treatment Switching Estimation.
Dan Jackson, Di Ran, Fanni Zhang, Mario Ouwens, Vitaly Druker, Michael Sweeting, Robert Hettle, Ian R White
Pub Date: 2025-01-01 | DOI: 10.1002/pst.2462
Treatment switching is common in randomized trials of oncology treatments; for example, control-group patients may receive the experimental treatment as a subsequent therapy. One possible estimand is the effect of the trial treatment had this type of switching not occurred, and two-stage estimation is an established approach for estimating it. We argue that other estimands of interest instead describe the effect of the trial treatments had the proportion of patients who switched been different, and we give precise definitions of such estimands. Motivating estimands using real-world data facilitates decision-making in universal health care systems. Focusing on estimation, we show that an alternative choice of secondary baseline, the time of first subsequent treatment, is easily defined and widely applicable, and makes the alternative estimands amenable to two-stage estimation. We develop methodology using propensity scores to adjust for confounding at the secondary baseline, together with a new quantile matching technique that can be used to implement any parametric form of the post-secondary-baseline survival model. Our methodology was motivated by a recent immuno-oncology trial in which a substantial proportion of control-group patients subsequently received a form of immunotherapy.
{"title":"New Methods for Two-Stage Treatment Switching Estimation.","authors":"Dan Jackson, Di Ran, Fanni Zhang, Mario Ouwens, Vitaly Druker, Michael Sweeting, Robert Hettle, Ian R White","doi":"10.1002/pst.2462","DOIUrl":"10.1002/pst.2462","url":null,"abstract":"<p><p>Treatment switching is common in randomized trials of oncology treatments. For example, control group patients may receive the experimental treatment as a subsequent therapy. One possible estimand is the effect of trial treatment if this type of switching had instead not occurred. Two-stage estimation is an established approach for estimating this estimand. We argue that other estimands of interest instead describe the effect of trial treatments if the proportion of patients who switched was different. We give precise definitions of such estimands. By motivating estimands using real-world data, decision-making in universal health care systems is facilitated. Focusing on estimation, we show that an alternative choice of secondary baseline, the time of first subsequent treatment, is easily defined, and widely applicable, and makes alternative estimands amenable to two-stage estimation. We develop methodology using propensity scores, to adjust for confounding at a secondary baseline, and a new quantile matching technique that can be used to implement any parametric form of the post-secondary baseline survival model. Our methodology was motivated by a recent immuno-oncology trial where a substantial proportion of control group patients subsequently received a form of immunotherapy.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 1","pages":"e2462"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11794985/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143189758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}