Pub Date: 2024-11-01 | Epub Date: 2024-07-10 | DOI: 10.1002/pst.2419
Marco Novelli, William F Rosenberger
Stratification on important variables is a common practice in clinical trials, since ensuring cosmetic balance on known baseline covariates is often deemed to be a crucial requirement for the credibility of the experimental results. However, the actual benefits of stratification are still debated in the literature. Other authors have shown that it does not improve efficiency in large samples and improves it only negligibly in smaller samples. This paper investigates different subgroup analysis strategies, with a particular focus on the potential benefits in terms of inferential precision of prestratification versus both poststratification and post hoc regression adjustment. For each of these approaches, the pros and cons of population-based versus randomization-based inference are discussed. The effects of the presence of a treatment-by-covariate interaction and the variability in the patient responses are also taken into account. Our results show that, in general, prestratifying does not provide substantial benefit. On the contrary, it may be deleterious, in particular for randomization-based procedures in the presence of a chronological bias. Even when there is treatment-by-covariate interaction, prestratification may backfire by considerably reducing the inferential precision.
"Exploring Stratification Strategies for Population- Versus Randomization-Based Inference." Pharmaceutical Statistics, pp. 1045-1058. DOI: 10.1002/pst.2419. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602877/pdf/
Pub Date: 2024-11-01 | Epub Date: 2024-07-28 | DOI: 10.1002/pst.2425
Benjamin F Hartley, Dave Lunn, Adrian P Mander
Correctly characterising the dose-response relationship and taking the correct dose forward for further study is a critical part of the drug development process. We use optimal design theory to compare different designs and show that using longitudinal data from all available timepoints in a continuous-time dose-response model can substantially increase the efficiency of estimation of the dose-response compared to a single-timepoint model. We give theoretical results to calculate the efficiency gains for a large class of these models. For example, a linearly growing Emax dose-response in a population with a between/within-patient variance ratio ranging from 0.1 to 1, measured at six visits, can be estimated with a relative efficiency gain of between 1.43 and 2.22, or equivalently, with a 30% to 55% reduction in sample size, compared to a model of the final timepoint alone. Fractional polynomials are a flexible way to incorporate data from repeated measurements, increasing precision without imposing strong constraints. Longitudinal dose-response models using two fractional polynomial terms are robust to mis-specification of the true longitudinal process while maintaining often large efficiency gains. These models have applications for characterising the dose-response at interim or final analyses.
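The quoted equivalence between relative efficiency and sample-size reduction follows from the standard relationship n_new / n_old = 1 / RE for equally precise estimates. A minimal sketch of that arithmetic (illustrative only, not code from the paper):

```python
# Sketch of the equivalence between a relative efficiency (RE) gain and the
# corresponding sample-size reduction, assuming the standard relationship
# n_new / n_old = 1 / RE for equally precise estimates.

def sample_size_reduction(relative_efficiency: float) -> float:
    """Fractional reduction in sample size achievable at a given RE."""
    return 1.0 - 1.0 / relative_efficiency

# The abstract's range: RE of 1.43-2.22 corresponds to roughly 30%-55% fewer subjects.
for re_gain in (1.43, 2.22):
    print(f"RE = {re_gain}: {100 * sample_size_reduction(re_gain):.0f}% smaller sample")
```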
"Efficient Study Design and Analysis of Longitudinal Dose-Response Data Using Fractional Polynomials." Pharmaceutical Statistics, pp. 1128-1143. DOI: 10.1002/pst.2425.
Pub Date: 2024-11-01 | Epub Date: 2024-06-25 | DOI: 10.1002/pst.2406
Paul Faya, Tianhui Zhang
Recombinant adeno-associated virus (AAV) has become a popular platform for many gene therapy applications. The strength of AAV-based products is a critical quality attribute that affects the efficacy of the drug and is measured as the concentration of vector genomes, or physical titer. Because the dosing of patients is based on the titer measurement, it is critical for manufacturers to ensure that the measured titer of the drug product is close to the actual concentration of the batch. Historically, dosing calculations have been performed using the measured titer, which is reported on the drug product label. However, due to recent regulatory guidance, sponsors are now expected to label the drug product with nominal or "target" titer. This new expectation for gene therapy products can pose a challenge in the presence of process and analytical variability. In particular, the manufacturer must decide if a dilution of the drug substance is warranted at the drug product stage to bring the strength in line with the nominal value. In this paper, we present two straightforward statistical methods to aid the manufacturer in the dilution decision. These approaches use the understanding of process and analytical variability to compute probabilities of achieving the desired drug product titer. We also provide an approach for determining an optimal assay replication strategy for achieving the desired probability of meeting drug product release specifications.
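The dilution decision hinges on the probability that the reported titer lands inside release limits given the combined process and analytical variability. A hypothetical sketch of that calculation (the function names, normality assumption, and all numbers are ours, not the authors' method):

```python
import math

# Hypothetical sketch (not the authors' exact method): probability that the
# measured titer of an undiluted drug product lies inside release limits,
# assuming independent normal process (batch-to-batch) and analytical
# variability, with the assay component shrinking as replicates are averaged.

def normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_within_limits(nominal, sd_process, sd_assay, n_reps, lower, upper):
    """P(reported titer in [lower, upper]) when n_reps assay replicates are averaged."""
    total_sd = math.sqrt(sd_process ** 2 + sd_assay ** 2 / n_reps)
    return normal_cdf((upper - nominal) / total_sd) - normal_cdf((lower - nominal) / total_sd)

# Illustrative numbers: nominal titer 1.0 (arbitrary units), limits +/- 20%,
# process SD 10% and assay SD 8% of nominal.
for n in (1, 3, 6):
    p = prob_within_limits(1.0, 0.10, 0.08, n, 0.80, 1.20)
    print(f"{n} replicate(s): P(pass) = {p:.3f}")
```

Increasing the number of averaged assay replicates shrinks the analytical component of the variance, which is the idea behind the paper's replication-strategy recommendation.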
"To Dilute or Not to Dilute: Nominal Titer Dosing for Genetic Medicines." Pharmaceutical Statistics, pp. 939-944. DOI: 10.1002/pst.2406.
Delayed outcomes are common in phase I oncology clinical trials. They cause logistical difficulties, waste resources, and prolong the trial duration. This article investigates this issue and proposes the time-to-event 3 + 3 (T3 + 3) design, which utilizes the actual follow-up time for at-risk patients with pending toxicity outcomes. The T3 + 3 design allows continuous accrual without unnecessary trial suspension and is costless and implementable with pretabulated dose decision rules. In addition, the T3 + 3 design uses isotonic regression to estimate the toxicity rates across dose levels and can therefore accommodate any targeted toxicity rate for the maximum tolerated dose (MTD). It greatly facilitates trial preparation and conduct without intensive computation or statistical consultation. The extension to other algorithm-based phase I dose-finding designs (e.g., the i3 + 3 design) is also studied. Comprehensive computer simulation studies are conducted to investigate the performance of the T3 + 3 design under various dose-toxicity scenarios. The results confirm that the T3 + 3 design substantially shortens the trial duration compared with the conventional 3 + 3 design and yields much higher accuracy in MTD identification than the rolling six design. In summary, the T3 + 3 design addresses the delayed-outcome issue while keeping the desirable features of the 3 + 3 design, such as simplicity, transparency, and costless implementation. It has great potential to accelerate early-phase drug development.
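The isotonic-regression step mentioned above is typically computed with the pool-adjacent-violators algorithm (PAVA). A minimal sketch, with illustrative data that are not from the paper:

```python
# Minimal pool-adjacent-violators (PAVA) sketch of the isotonic regression
# step described above: observed toxicity rates are forced to be
# non-decreasing in dose, weighting each dose by its number of evaluable
# patients. The data and target rate below are illustrative, not from the paper.

def pava(rates, weights):
    """Weighted non-decreasing isotonic regression via pool-adjacent-violators."""
    # Each block holds [pooled rate, pooled weight, number of doses pooled].
    blocks = [[r, w, 1] for r, w in zip(rates, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # monotonicity violated: pool
            r1, w1, c1 = blocks[i]
            r2, w2, c2 = blocks[i + 1]
            blocks[i] = [(r1 * w1 + r2 * w2) / (w1 + w2), w1 + w2, c1 + c2]
            del blocks[i + 1]
            i = max(i - 1, 0)                        # re-check the previous pair
        else:
            i += 1
    return [r for r, _, c in blocks for _ in range(c)]

# Illustrative interim data: toxicities / evaluable patients at four doses.
tox = [0, 2, 1, 3]
n = [3, 6, 6, 6]
iso = pava([t / m for t, m in zip(tox, n)], n)
target = 0.25                                        # works for any target DLT rate
mtd = min(range(len(iso)), key=lambda d: abs(iso[d] - target))
print("isotonic rates:", [round(r, 2) for r in iso], "-> MTD index:", mtd)
```

Ties are broken toward the lower dose here; an actual design would specify its own tie-breaking and safety rules.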
Jiaying Guo, Mengyi Lu, Isabella Wan, Yumin Wang, Leng Han, Yong Zang. "T3 + 3: 3 + 3 Design With Delayed Outcomes." Pharmaceutical Statistics, pp. 959-970. DOI: 10.1002/pst.2414. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602947/pdf/
Pub Date: 2024-11-01 | Epub Date: 2024-08-09 | DOI: 10.1002/pst.2417
Yeonhee Park, Won Chang
Dose-finding studies play a crucial role in drug development by identifying the optimal dose(s) for later studies while considering tolerability. This not only saves time and effort in proceeding with Phase III trials but also improves efficacy. In an era of precision medicine, it is not ideal to assume patient homogeneity in dose-finding studies as patients may respond differently to the drug. To address this, we propose a personalized dose-finding algorithm that assigns patients to individualized optimal biological doses. Our design follows a two-stage approach. Initially, patients are enrolled under broad eligibility criteria. Based on the Stage 1 data, we fit a regression model of toxicity and efficacy outcomes on dose and biomarkers to characterize treatment-sensitive patients. In the second stage, we restrict the trial population to sensitive patients, apply a personalized dose allocation algorithm, and choose the recommended dose at the end of the trial. Simulation studies show that the proposed design reliably enriches the trial population, minimizes the number of failures, and yields superior operating characteristics compared to several existing dose-finding designs in terms of both the percentage of correct selection and the number of patients treated at target dose(s).
"A Personalized Dose-Finding Algorithm Based on Adaptive Gaussian Process Regression." Pharmaceutical Statistics, pp. 1181-1205. DOI: 10.1002/pst.2417. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602944/pdf/
Pub Date: 2024-11-01 | Epub Date: 2024-08-08 | DOI: 10.1002/pst.2433
Helle Lynggaard, Oliver N Keene, Tobias Mütze, Sunita Rehal
Most published applications of the estimand framework have focused on superiority trials. However, non-inferiority trials present specific challenges compared to superiority trials. The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use notes in their addendum on estimands and sensitivity analysis in clinical trials that there may be special considerations to the implementation of estimands in clinical trials with a non-inferiority objective yet provides little guidance. This paper discusses considerations that trial teams should make when defining estimands for a clinical trial with a non-inferiority objective. We discuss how the pre-addendum way of establishing non-inferiority can be embraced by the estimand framework including a discussion of the role of the Per Protocol analysis set. We examine what clinical questions of interest can be formulated in the context of non-inferiority trials and outline why we do not think it is sensible to describe an estimand as 'conservative'. The impact of the estimand framework on key considerations in non-inferiority trials such as whether trials should have more than one primary estimand, the choice of non-inferiority margin, assay sensitivity, switching from non-inferiority to superiority and estimation are discussed. We conclude by providing a list of recommendations, and important considerations for defining estimands for trials with a non-inferiority objective.
"Applying the Estimand Framework to Non-Inferiority Trials." Pharmaceutical Statistics, pp. 1156-1165. DOI: 10.1002/pst.2433. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602919/pdf/
Alexander C Cambon, James Travis, Liping Sun, Jada Idokogi, Anna Kettermann
An informed estimate of subject-level variance is a key determinant of accurate sample size estimation for clinical trials. Evaluating completed adult type 2 diabetes studies submitted to the FDA for the accuracy of the variance estimate used at the planning stage provides insights to inform the sample size requirements of future studies. From the U.S. Food and Drug Administration (FDA) database of new drug applications, containing 14,106 subjects from 26 phase 3 randomized studies submitted in support of drug approvals in adult type 2 diabetes and reviewed between 2013 and 2017, we obtained estimates of subject-level variance for the primary endpoint: change in glycated hemoglobin (HbA1c) from baseline to 6 months. In addition, we used nine additional studies to examine the impact of clinically meaningful covariates on residual standard deviation and sample size re-estimation. Our analyses show that reduced sample sizes can be used without compromising the validity of efficacy results for adult type 2 diabetes drug trials. This finding has implications for future research involving the adult type 2 diabetes population, including the potential to shorten recruitment periods and improve the timeliness of results. Furthermore, our findings could be utilized in the design of future endocrinology clinical trials.
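A planning-stage SD enters the required sample size quadratically, which is why the accuracy of the variance estimate matters so much. A sketch using the standard two-arm formula (the SD and effect-size numbers are illustrative, not values from the FDA submissions analysed in the paper):

```python
import math

# Standard two-arm, two-sided sample-size sketch showing how the subject-level
# SD feeds the required n: per-arm n = 2 * ((z_{1-a/2} + z_{1-b}) * sd / delta)^2.
# The SD and effect-size numbers below are illustrative only.

def z_quantile(p: float) -> float:
    """Standard normal quantile via bisection on erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def per_arm_n(sd, delta, alpha=0.05, power=0.90):
    z = z_quantile(1 - alpha / 2) + z_quantile(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

# Overstating the SD of the 6-month HbA1c change inflates n quadratically:
print(per_arm_n(sd=1.1, delta=0.5))   # planning-stage SD assumption
print(per_arm_n(sd=0.9, delta=0.5))   # smaller SD, as might be seen in completed trials
```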
"Optimizing Sample Size Determinations for Phase 3 Clinical Trials in Type 2 Diabetes." Pharmaceutical Statistics, published online 2024-10-30. DOI: 10.1002/pst.2446.
Max Menssen, Martina Dammann, Firas Fneish, David Ellenberger, Frank Schaarschmidt
In pre-clinical and medical quality control, it is of interest to assess the stability of the process under monitoring or to validate a current observation using historical control data. Classically, this is done by applying historical control limits (HCL) graphically displayed in control charts. In many applications, HCL are applied to count data, for example, the number of revertant colonies (Ames assay) or the number of relapses per multiple sclerosis patient. Count data may be overdispersed, can be heavily right-skewed, and clusters may differ in cluster size or other baseline quantities (e.g., the number of petri dishes per control group or different lengths of monitoring time per patient). Based on the quasi-Poisson assumption or the negative binomial distribution, we propose prediction intervals for overdispersed count data to be used as HCL. Variable baseline quantities are accounted for by offsets. Furthermore, we provide a bootstrap calibration algorithm that accounts for the skewed distribution and achieves equal tail probabilities. Comprehensive Monte Carlo simulations assessing the coverage probabilities of eight different methods for HCL calculation reveal that the bootstrap-calibrated prediction intervals control the type I error best. Heuristics traditionally used in control charts (e.g., the limits in Shewhart c- or u-charts, or the mean ± 2 SD) fail to control a pre-specified coverage probability. The application of HCL is demonstrated with data from the Ames assay and with numbers of relapses of multiple sclerosis patients. The proposed prediction intervals and the algorithm for bootstrap calibration are publicly available via the R package predint.
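To make the quasi-Poisson idea concrete, here is a deliberately crude asymptotic prediction interval for one future count: dispersion is estimated by the Pearson statistic over its degrees of freedom, and the prediction variance combines future-observation and estimation uncertainty. This is only in the spirit of the paper's approach; the calibrated intervals the authors implement in predint additionally control coverage and equalize tail probabilities.

```python
import math

# Crude asymptotic quasi-Poisson prediction interval for one future count.
# Dispersion phi is estimated by the Pearson statistic over its degrees of
# freedom; the prediction variance phi * mean * (1 + 1/n) combines the
# variance of the future observation and of the estimated mean.
# This is a simplified sketch, not the bootstrap-calibrated method of the paper.

def quasi_poisson_pi(counts, z=1.96):
    n = len(counts)
    mean = sum(counts) / n
    phi = sum((y - mean) ** 2 / mean for y in counts) / (n - 1)
    phi = max(phi, 1.0)                  # guard against apparent underdispersion
    half = z * math.sqrt(phi * mean * (1 + 1 / n))
    return max(mean - half, 0.0), mean + half

# Illustrative historical controls (e.g., revertant colonies per plate):
hist = [18, 25, 14, 21, 31, 17, 22, 28, 16, 24]
lo, hi = quasi_poisson_pi(hist)
print(f"historical control limits: [{lo:.1f}, {hi:.1f}]")
```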
"Prediction Intervals for Overdispersed Poisson Data and Their Application in Medical and Pre-Clinical Quality Control." Pharmaceutical Statistics, published online 2024-10-30. DOI: 10.1002/pst.2447.
'Treatment effect measures under nonproportional hazards' by Snapinn et al. (Pharmaceutical Statistics, 22, 181-193) recently proposed some novel estimates of treatment effect for time-to-event endpoints. In this note, we clarify three points related to the proposed estimators that help to elucidate their properties. We hope that their work, and this commentary, will motivate further discussion concerning treatment effect measures that do not require the proportional hazards assumption.
Dan Jackson, Michael Sweeting, Rose Baker. "Treatment Effect Measures Under Nonproportional Hazards." Pharmaceutical Statistics, published online 2024-10-27. DOI: 10.1002/pst.2449.
Study designs incorporate interim analyses to allow for modifications to the trial design. These analyses may aid decisions regarding sample size, futility, and safety. Furthermore, they may provide evidence about potential differences between treatment arms. Bayesian response adaptive randomization (RAR) skews allocation proportions such that fewer participants are assigned to the inferior treatments. However, these allocation changes may introduce covariate imbalances. We discuss two versions of Bayesian RAR (with and without covariate adjustment for a binary covariate) for continuous outcomes analyzed using change scores and repeated measures, while considering either regression or mixed models for interim analysis modeling. Through simulation studies, we show that RAR (both versions) allocates more participants to better treatments compared to equal randomization, while reducing potential covariate imbalances. We also show that dynamic allocation using mixed models for repeated measures yields a smaller allocation proportion variance while having a similar covariate imbalance as regression models. Additionally, covariate imbalance was smallest for methods using covariate-adjusted RAR (CARA) in scenarios with small sample sizes and covariate prevalence less than 0.3. Covariate imbalance did not differ between RAR and CARA in simulations with larger sample sizes and higher covariate prevalence. We thus recommend a CARA approach for small pilot/exploratory studies for the identification of candidate treatments for further confirmatory studies.
{"title":"Bayesian Response Adaptive Randomization for Randomized Clinical Trials With Continuous Outcomes: The Role of Covariate Adjustment.","authors":"Vahan Aslanyan, Trevor Pickering, Michelle Nuño, Lindsay A Renfro, Judy Pa, Wendy J Mack","doi":"10.1002/pst.2443","DOIUrl":"https://doi.org/10.1002/pst.2443","url":null,"abstract":"<p><p>Study designs incorporate interim analyses to allow for modifications to the trial design. These analyses may aid decisions regarding sample size, futility, and safety. Furthermore, they may provide evidence about potential differences between treatment arms. Bayesian response adaptive randomization (RAR) skews allocation proportions such that fewer participants are assigned to the inferior treatments. However, these allocation changes may introduce covariate imbalances. We discuss two versions of Bayesian RAR (with and without covariate adjustment for a binary covariate) for continuous outcomes analyzed using change scores and repeated measures, while considering either regression or mixed models for interim analysis modeling. Through simulation studies, we show that RAR (both versions) allocates more participants to better treatments compared to equal randomization, while reducing potential covariate imbalances. We also show that dynamic allocation using mixed models for repeated measures yields a smaller allocation proportion variance while having a similar covariate imbalance as regression models. Additionally, covariate imbalance was smallest for methods using covariate-adjusted RAR (CARA) in scenarios with small sample sizes and covariate prevalence less than 0.3. Covariate imbalance did not differ between RAR and CARA in simulations with larger sample sizes and higher covariate prevalence. We thus recommend a CARA approach for small pilot/exploratory studies for the identification of candidate treatments for further confirmatory studies.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142505735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}