Pub Date: 2024-01-01 | Epub Date: 2023-10-02 | DOI: 10.1002/pst.2340
Hans-Jochen Weber, Stephen Corson, Jiang Li, François Mercier, Satrajit Roychoudhury, Martin Oliver Sailer, Steven Sun, Alexander Todd, Godwin Yung
Duration of response (DOR) and time to response (TTR) are typically evaluated as secondary endpoints in early-stage clinical studies in oncology when efficacy is assessed by the best overall response and presented as the overall response rate. Despite the common use of DOR and TTR, particularly in single-arm studies, the definition of these endpoints and the questions they are intended to answer remain unclear. Motivated by the estimand framework, we present relevant scientific questions of interest for DOR and TTR and propose corresponding estimand definitions. We elaborate on how to handle relevant intercurrent events, which should follow the same considerations as implemented for the primary response estimand. A case study in mantle cell lymphoma illustrates the implementation of relevant estimands of DOR and TTR. We close the paper with practical recommendations for implementing DOR and TTR in clinical study protocols.
Title: Duration of and time to response in oncology clinical trials from the perspective of the estimand framework. (Pharmaceutical Statistics, DOI: 10.1002/pst.2340)
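As a rough numerical illustration of these endpoint definitions (all records, dates, and the data cutoff below are hypothetical), TTR can be computed as the time from treatment start to first documented response, and DOR as the time from first response to progression, censored at the data cutoff:

```python
from datetime import date

# Hypothetical responder records: treatment start, first documented
# response, and progression (None = still progression-free at cutoff).
patients = [
    {"start": date(2023, 1, 10), "response": date(2023, 3, 1), "progression": date(2023, 9, 15)},
    {"start": date(2023, 2, 5),  "response": date(2023, 4, 20), "progression": None},
]
cutoff = date(2023, 12, 31)

def ttr_days(p):
    # Time to response: treatment start to first documented response.
    return (p["response"] - p["start"]).days

def dor_days(p):
    # Duration of response: first response to progression, censored at
    # the data cutoff when no progression has been observed.
    end = p["progression"] or cutoff
    return (end - p["response"]).days, p["progression"] is not None

for p in patients:
    d, event = dor_days(p)
    print(ttr_days(p), d, event)
```

An estimand-aligned analysis would additionally specify, before applying such derivations, how intercurrent events (e.g., start of a new anticancer therapy) are handled.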
Pub Date: 2024-01-01 | Epub Date: 2023-09-07 | DOI: 10.1002/pst.2337
Gosuke Homma, Takuma Yoshida
Count outcomes are collected in clinical trials for new drug development in several therapeutic areas, and the event rate is commonly used as a single primary endpoint. Count outcomes whose variance exceeds the mean are said to exhibit overdispersion; such outcomes are therefore commonly assumed to follow a negative binomial distribution. However, in clinical trials for treating asthma and chronic obstructive pulmonary disease (COPD), a regulatory agency has suggested that a continuous endpoint related to lung function must be evaluated as a primary endpoint in addition to the event rate. The two co-primary endpoints to be evaluated thus comprise an overdispersed count outcome and a continuous outcome. Some researchers have proposed sample size calculation methods in the context of co-primary endpoints for various outcome types. However, methodologies for sample size calculation in trials with two co-primary endpoints including overdispersed count and continuous outcomes, as required when planning clinical trials for treating asthma and COPD, have yet to be proposed. In this study, we aimed to develop a hypothesis-testing method and a corresponding sample size calculation method for two co-primary endpoints comprising overdispersed count and continuous outcomes. In a simulation, we demonstrated that the proposed sample size calculation method has adequate power accuracy. In addition, we illustrated an application of the proposed method to a placebo-controlled Phase 3 trial for patients with COPD.
Title: Sample size calculation in clinical trials with two co-primary endpoints including overdispersed count and continuous outcomes. (Pharmaceutical Statistics, DOI: 10.1002/pst.2337)
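The co-primary requirement, that both endpoints must individually reach significance, can be illustrated with a naive simulation. This is not the authors' analytic method: the sample size, effect sizes, and the reduction of each endpoint to an approximately normal z-statistic are assumptions made purely for illustration.

```python
import random, math

random.seed(1)

def simulate_joint_power(n, delta_count, delta_fev1, n_sim=2000):
    """Crude sketch: joint power for two co-primary endpoints, both of
    which must reach one-sided significance at alpha = 0.025.  Effect
    sizes are in standardised units (an illustrative assumption)."""
    z_crit = 1.96
    hits = 0
    for _ in range(n_sim):
        z1 = random.gauss(delta_count * math.sqrt(n / 2), 1)
        z2 = random.gauss(delta_fev1 * math.sqrt(n / 2), 1)
        if z1 > z_crit and z2 > z_crit:   # co-primary: both must win
            hits += 1
    return hits / n_sim

# With roughly 90% marginal power per endpoint, joint power is lower,
# which is what drives the sample size methods the paper develops.
print(simulate_joint_power(n=120, delta_count=0.45, delta_fev1=0.45))
```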
Pub Date: 2024-01-01 | Epub Date: 2023-09-11 | DOI: 10.1002/pst.2335
Olympia Papachristofi, Björn Bornkamp, Melanie Wright, Tim Friede
Adaptive seamless trial designs, combining the learning and confirming cycles of drug development in a single trial, have gained popularity in recent years. Adaptations may include dose selection, sample size re-estimation and enrichment of the study population. Despite methodological advances and recognition of the potential efficiency gains such designs offer, their implementation, including how to enable efficient decision making on the adaptations in interim analyses, remains a key challenge in their adoption. This manuscript uses a case study of an adaptive seamless proof-of-concept (Phase 2a)/dose-finding (Phase 2b) study to showcase potential adaptive features that can be implemented in trial designs at earlier development stages, and the role of simulations in assessing the design operating characteristics and specifying the decision rules for the adaptations. It further outlines the elements needed to support successful interim decision making on the adaptations while safeguarding study integrity, including the role of different stakeholders, interactive simulation-based tools to facilitate decision making, and operational aspects requiring preplanning. The benefits of the chosen adaptive Phase 2a/2b design compared with the traditional paradigm of two separate studies (2a and 2b) are discussed. With careful planning and appreciation of their complexity and of the components needed for their implementation, seamless adaptive designs have the potential to yield significant savings in both time and resources.
Title: Interim decision making in seamless trial designs: An application in an adaptive dose-finding study in a rare kidney disease. (Pharmaceutical Statistics, DOI: 10.1002/pst.2335)
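The role of simulation in specifying interim decision rules can be sketched as follows; the go/no-go threshold, per-arm sample size, and standardised normal outcome scale are hypothetical, not those of the rare kidney disease case study.

```python
import random, statistics

random.seed(7)

def prob_go(true_effect, n_per_arm=30, go_threshold=0.3, n_sim=4000):
    """Sketch of simulation-based operating characteristics for an
    interim go/no-go rule: 'go' if the observed treatment-control
    difference exceeds go_threshold (all numbers illustrative)."""
    goes = 0
    for _ in range(n_sim):
        trt = [random.gauss(true_effect, 1) for _ in range(n_per_arm)]
        ctl = [random.gauss(0.0, 1) for _ in range(n_per_arm)]
        if statistics.mean(trt) - statistics.mean(ctl) > go_threshold:
            goes += 1
    return goes / n_sim

# Operating characteristics under the null and under a target effect:
print("false-go rate under no effect:", prob_go(0.0))
print("correct-go rate under target effect:", prob_go(0.6))
```

In practice such simulations are run over a grid of scenarios and thresholds, and the results feed the interactive decision-support tools the paper describes.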
Pub Date: 2024-01-01 | Epub Date: 2023-09-26 | DOI: 10.1002/pst.2339
Fabio Rigat
Prior probabilities of clinical hypotheses are not yet systematically used in clinical trial design, due to a concern that poor priors may lead to poor decisions. To address this concern, a conservative approach to Bayesian trial design is illustrated here, requiring that the operating characteristics of the primary trial outcome are stronger than the prior. This approach is complementary to current Bayesian design methods, in that it insures against prior-data conflict by defining a sample size commensurate with a discrete design prior. This approach is ethical, in that it requires designs appropriate to achieving pre-specified levels of clinical equipoise imbalance. Practical examples are discussed, illustrating the design of trials with binary or time-to-event endpoints. Moderate increases in phase II study sample size are shown to deliver strong levels of overall evidence for go/no-go clinical development decisions. Levels of negative evidence provided by group sequential confirmatory designs are found to be negligible, highlighting the importance of complementing efficacy boundaries with non-binding futility criteria.
Title: A conservative approach to leveraging external evidence for effective clinical trial design. (Pharmaceutical Statistics, DOI: 10.1002/pst.2339)
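One standard way to weigh a discrete design prior against a trial's operating characteristics is assurance, the prior-averaged probability of success. The decision rule and prior weights below are invented for illustration and are not the paper's examples.

```python
from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def assurance(n, k_success, design_prior):
    """Probability of trial success averaged over a discrete design
    prior {response rate: weight} -- a sketch of confronting operating
    characteristics with prior beliefs (numbers hypothetical)."""
    return sum(w * binom_tail(n, k_success, p) for p, w in design_prior.items())

# Declare success if >= 12 of 30 patients respond; the prior puts 40%
# weight on an inactive drug (p = 0.2) and 60% on an active one (p = 0.5).
prior = {0.2: 0.4, 0.5: 0.6}
print(round(assurance(30, 12, prior), 3))
```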
The sum of the longest diameter (SLD) of the target lesions is a longitudinal biomarker used to assess tumor response in cancer clinical trials, which can inform about early treatment effect. This biomarker is semicontinuous, often characterized by an excess of zeros and right skewness. Conditional two-part joint models were introduced to account for the excess of zeros in the longitudinal biomarker distribution and to link it to a time-to-event outcome. A limitation of the conditional two-part model is that it only provides an effect of covariates, such as treatment, on the conditional mean of positive biomarker values, and not an overall effect on the biomarker, which is often of clinical relevance. As an alternative, we propose in this article a marginalized two-part joint model (M-TPJM) for the repeated measurements of the SLD and a terminal event, where the covariates affect the overall mean of the biomarker. Our simulation studies confirmed the good performance of the marginalized model in terms of estimation and coverage rates. Our application of the M-TPJM to a randomized clinical trial in advanced head and neck cancer shows that adding panitumumab to chemotherapy increases the odds of observing a disappearance of all target lesions compared with chemotherapy alone, suggesting a possible indirect effect of the combined treatment on time to death.
Title: A marginalized two-part joint model for a longitudinal biomarker and a terminal event with application to advanced head and neck cancers. Authors: Denis Rustand, Laurent Briollais, Virginie Rondeau. Pub Date: 2024-01-01 | DOI: 10.1002/pst.2338
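The distinction between the two model classes rests on the two-part identity for a semicontinuous outcome, overall mean = Pr(Y > 0) x E[Y | Y > 0]: the conditional model puts covariates on the second factor only, while the marginalized model targets the product. A quick simulation verifies the identity (the zero proportion and log-normal positive part are illustrative choices, not the SLD model itself):

```python
import random, statistics

random.seed(3)

# Semicontinuous SLD-like outcome: excess zeros plus a right-skewed
# (log-normal) positive part, with purely illustrative parameters.
def draw(p_zero, mu_log):
    if random.random() < p_zero:
        return 0.0
    return random.lognormvariate(mu_log, 0.5)

sample = [draw(p_zero=0.3, mu_log=2.0) for _ in range(20000)]

# Two-part decomposition: overall mean = Pr(Y > 0) * E[Y | Y > 0].
positives = [y for y in sample if y > 0]
p_pos = len(positives) / len(sample)
decomposed = p_pos * statistics.mean(positives)
overall = statistics.mean(sample)
print(round(overall, 2), round(decomposed, 2))
```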
Multiregional clinical trials (MRCTs) have become increasingly common in the development of new drugs intended for simultaneous approval worldwide. When planning MRCTs, a major statistical challenge is determination of the regional sample size. In general, the regional sample size is determined such that the regional consistency probability, defined as the probability of meeting the regional consistency criterion, exceeds a prespecified value. The Japanese Ministry of Health, Labour and Welfare proposed two criteria for regional consistency, and many researchers have proposed corresponding closed-form formulas for calculating regional consistency probabilities when the primary outcome is continuous. Although some researchers have argued that those formulas are also applicable to cases with binary outcomes, it remains questionable whether this argument holds. Based on simulation results, we demonstrate that the existing formulas are inappropriate for binary cases, even when the regional sample size is sufficiently large. To address this issue, we develop alternative formulas and use simulation to show that they provide accurate regional consistency probabilities. Furthermore, we present an application of our proposed formulas to an MRCT of advanced or metastatic clear-cell renal cell carcinoma.
Title: Cautionary note on regional consistency evaluation in multiregional clinical trials with binary outcomes. Authors: Gosuke Homma. Pub Date: 2023-12-20 | DOI: 10.1002/pst.2358
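A Monte Carlo check of a regional consistency probability for a binary endpoint might look as follows. The criterion sketched here (regional risk difference at least pi times the overall estimate) mirrors the MHLW-style Method 1 in spirit only; all numbers are invented, and for brevity the regional patients are resampled independently rather than nested within the overall trial, which the paper's setting would require.

```python
import random

random.seed(11)

def consistency_prob(n_total, f_region, p_ctl, p_trt, pi=0.5, n_sim=3000):
    """Monte Carlo sketch of the regional consistency probability for a
    binary endpoint: the region's estimated risk difference must be at
    least pi times the overall estimate (illustrative setup only)."""
    n_reg = int(n_total * f_region)
    ok = 0
    for _ in range(n_sim):
        def diff(n):
            trt = sum(random.random() < p_trt for _ in range(n // 2))
            ctl = sum(random.random() < p_ctl for _ in range(n // 2))
            return trt / (n // 2) - ctl / (n // 2)
        overall = diff(n_total)   # simplification: region resampled separately
        region = diff(n_reg)
        if region >= pi * overall:
            ok += 1
    return ok / n_sim

print(consistency_prob(n_total=600, f_region=0.2, p_ctl=0.3, p_trt=0.45))
```

The paper's point is precisely that closed-form continuous-outcome formulas can disagree with such binary-outcome probabilities, so simulation is a useful cross-check.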
In this article, we propose an approximate exact score (AES) test for noninferiority comparisons and derive its test-based confidence interval for the difference between two independent binomial proportions. This test was published in the literature, but its associated confidence interval was not. The p-value for this test is obtained by using exact binomial probabilities with the nuisance parameter replaced by its restricted maximum likelihood estimate. Calculated type I errors revealed that the AES method has important advantages for noninferiority comparisons over popular asymptotic methods for adequately powered confirmatory clinical trials at 80% or 90% statistical power. For unbalanced sample sizes of the compared groups, type I errors for the asymptotic score method were shown to exceed the nominal level in a systematic pattern over a range of true proportions, whereas the AES method did not suffer from this problem. On average, the true type I error of the AES method was closer to the nominal level than that of all other methods considered in the empirical comparisons. In rare cases, type I errors of the AES test exceeded the nominal level, but only by a small amount. The presented examples showed that the AES method can be more attractive in practice than commonly used exact methods. In addition, the p-value and confidence interval of the AES method can be obtained in under 30 seconds of computer time for most confirmatory trials. Theoretical arguments, combined with empirical evidence and fast computation, should make the AES method attractive in statistical practice.
Title: Analysis of two binomial proportions in noninferiority confirmatory trials. Authors: Hassan Lakkis, Andrew Lakkis. Pub Date: 2023-12-11 | DOI: 10.1002/pst.2351
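A score-type noninferiority statistic with the nuisance parameter at its restricted MLE can be sketched as below. Note the hedges: this uses a normal approximation and a grid-search restricted MLE, not the exact binomial tail probabilities or the closed-form restricted MLE of the AES procedure itself.

```python
from math import log, sqrt

def score_stat_ni(x1, n1, x0, n0, margin):
    """Score-type statistic for the risk difference p1 - p0 under
    H0: p1 - p0 = -margin, with the nuisance parameter (the control
    rate) replaced by its restricted MLE, found here by grid search
    for clarity rather than by a closed form."""
    def loglik(p0):
        p1 = p0 - margin
        if not (0 < p1 < 1 and 0 < p0 < 1):
            return float("-inf")
        return (x1 * log(p1) + (n1 - x1) * log(1 - p1)
                + x0 * log(p0) + (n0 - x0) * log(1 - p0))
    # Restricted MLE of the control rate under H0.
    p0_hat = max((i / 1000 for i in range(1, 1000)), key=loglik)
    p1_hat = p0_hat - margin
    var = p1_hat * (1 - p1_hat) / n1 + p0_hat * (1 - p0_hat) / n0
    return (x1 / n1 - x0 / n0 + margin) / sqrt(var)

# New treatment 44/50 vs control 45/50, noninferiority margin 0.10.
print(round(score_stat_ni(44, 50, 45, 50, 0.10), 2))
```

The AES approach replaces the normal reference distribution with exact binomial probabilities evaluated at the restricted MLE, which is what drives its better small-sample type I error control.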
Pub Date: 2023-11-01 | Epub Date: 2023-07-10 | DOI: 10.1002/pst.2324
Yi-Cheng Tai, Weijing Wang, Martin T Wells
We introduce a new two-sample inference procedure to assess the relative performance of two groups over time. Our model-free method does not assume proportional hazards, making it suitable for scenarios where nonproportional hazards may exist. Our procedure includes a diagnostic tau plot to identify changes in hazard timing and a formal inference procedure. The tau-based measures we develop are clinically meaningful and provide interpretable estimands to summarize the treatment effect over time. Our proposed statistic is a U-statistic and exhibits a martingale structure, allowing us to construct confidence intervals and perform hypothesis testing. Our approach is robust with respect to the censoring distribution. We also demonstrate how our method can be applied for sensitivity analysis in scenarios with missing tail information due to insufficient follow-up. Without censoring, the Kendall's tau estimator we propose reduces to the Wilcoxon-Mann-Whitney statistic. We evaluate our method using simulations to compare its performance with the restricted mean survival time and log-rank statistics. We also apply our approach to data from several published oncology clinical trials where nonproportional hazards may exist.
Title: Two-sample inference procedures under nonproportional hazards. (Pharmaceutical Statistics, DOI: 10.1002/pst.2324)
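The stated reduction in the absence of censoring is easy to verify numerically: a tau-type between-group measure P(Y > X) - P(X > Y) is a rescaling of the Wilcoxon-Mann-Whitney U-statistic (toy data below; this sketch ignores the censoring machinery that is the paper's actual contribution).

```python
def wmw_u(sample_x, sample_y):
    # Wilcoxon-Mann-Whitney U: number of (x, y) pairs with y > x,
    # counting ties as 1/2.
    u = 0.0
    for x in sample_x:
        for y in sample_y:
            u += 1.0 if y > x else 0.5 if y == x else 0.0
    return u

def tau_like(sample_x, sample_y):
    # Without censoring, the between-group concordance measure
    # P(Y > X) - P(X > Y) is just a rescaled WMW statistic.
    n = len(sample_x) * len(sample_y)
    return 2 * wmw_u(sample_x, sample_y) / n - 1

control = [3, 5, 7, 9]
treated = [6, 8, 10, 12]
print(tau_like(control, treated))
```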
Pub Date: 2023-11-01 | Epub Date: 2023-07-11 | DOI: 10.1002/pst.2325
Jenny Devenport, Alexander Schacht
The role and value of statistical contributions in drug development up to the point of health authority approval are well understood. But health authority approval is only a true 'win' if the evidence enables access and adoption into clinical practice. In today's complex and evolving healthcare environment, there is additional strategic evidence generation, communication, and decision support that can benefit from statistical contributions. In this article, we describe the history of medical affairs in the context of drug development, the factors driving post-approval evidence generation needs, and the opportunities for statisticians to optimize evidence generation for stakeholders beyond health authorities in order to ensure that new medicines reach appropriate patients.
Title: Leading beyond regulatory approval: Opportunities for statisticians to optimize evidence generation and impact clinical practice. (Pharmaceutical Statistics, DOI: 10.1002/pst.2325)
Pub Date: 2023-11-01 | Epub Date: 2023-08-08 | DOI: 10.1002/pst.2329
Björn Holzhauer, Emmanuel Taiwo Adewuyi
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates in the trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on a regression model that, besides terms for treatment and some further terms (e.g., stratification factors used in the randomization scheme of the trial), also includes a baseline (pre-treatment) assessment of the primary outcome. We suggest including a "super-covariate", that is, a patient-specific prediction of the control group outcome, as a further covariate (but not as an offset). We train a prognostic model, or an ensemble of such models, on individual patient (or aggregate) data from other studies in similar patients, but not on the new trial under analysis. This approach has the potential to use historical data to increase the power of clinical trials and avoids the concern of type I error inflation associated with Bayesian approaches; in contrast to those approaches, its benefit grows with larger sample sizes. It is important for the prognostic models behind "super-covariates" to generalize well across different patient populations, so that they reduce unexplained variability similarly whether or not the trial(s) used to develop the model are identical to the new trial. In an example in neovascular age-related macular degeneration, we saw efficiency gains from the use of a "super-covariate".
Title: "Super-covariates": Using predicted control group outcome as a covariate in randomized clinical trials. (Pharmaceutical Statistics, DOI: 10.1002/pst.2329)
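A minimal sketch of the idea, assuming a toy prognostic relationship and a simple linear model in place of the richer models or ensembles the authors describe (all numbers hypothetical):

```python
import random, statistics

random.seed(5)

# Toy generative model: baseline severity is prognostic for the outcome.
def outcome(baseline):
    return 0.8 * baseline + random.gauss(0, 0.5)

def fit_line(xs, ys):
    # Least-squares slope/intercept: a stand-in "prognostic model".
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Train on historical control data only, never on the new trial.
hist_b = [random.gauss(10, 2) for _ in range(500)]
hist_y = [outcome(b) for b in hist_b]
slope, intercept = fit_line(hist_b, hist_y)

# In the new trial, the "super-covariate" is the predicted control
# outcome; adjusting for it removes prognostic variation.
new_b = [random.gauss(10, 2) for _ in range(200)]
new_y = [outcome(b) for b in new_b]
super_cov = [intercept + slope * b for b in new_b]
raw_var = statistics.variance(new_y)
adj_var = statistics.variance([y - s for y, s in zip(new_y, super_cov)])
print(round(raw_var, 2), round(adj_var, 2))
```

The residual variance after subtracting the prediction is much smaller than the raw outcome variance, which is the source of the power gain when the prediction enters the primary analysis as a covariate.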