A propensity score-integrated approach for leveraging external data in a randomized controlled trial with time-to-event endpoints.
Wei-Chen Chen, Nelson Lu, Chenguang Wang, Heng Li, Changhong Song, Ram Tiwari, Yunling Xu, Lilly Q Yue
Pub Date: 2024-09-01. DOI: 10.1002/pst.2377. Pharmaceutical Statistics, pages 645-661.
In a randomized controlled trial (RCT) with a time-to-event endpoint, commonly used statistical tests for various aspects of survival differences, such as the survival probability at a fixed time point, the survival function up to a specific time point, and the restricted mean survival time, may not be directly applicable when external data are leveraged to augment an arm (or both arms) of the RCT. In this paper, we propose a propensity score-integrated approach that extends such tests to settings where external data are leveraged. Simulation studies are conducted to evaluate the operating characteristics of three propensity score-integrated statistical tests, and an illustrative example demonstrates how the proposed procedures can be implemented.
Time-to-event estimands and loss to follow-up in oncology in light of the estimands guidance.
Jonathan M Siegel, Hans-Jochen Weber, Stefan Englert, Feng Liu, Michelle Casey
Pub Date: 2024-09-01. Epub Date: 2024-03-29. DOI: 10.1002/pst.2386. Pharmaceutical Statistics, pages 709-727.
Time-to-event estimands are central to many oncology clinical trials. The estimands framework (the addendum to the ICH E9 guideline) calls for precisely defining the treatment effect of interest so that it aligns with the clinical question of interest, and requires predefining the handling of intercurrent events (ICEs) that occur after treatment initiation and "affect either the interpretation or the existence of the measurements associated with the clinical question of interest." We discuss a practical problem in clinical trial design and execution: in some clinical contexts it is not feasible to systematically follow patients to an event of interest. Loss to follow-up in the presence of intercurrent events can affect the meaning and interpretation of the study results. We provide recommendations for trial design, stressing the need for close alignment of the clinical question of interest and the study design, the impact on data collection, and other practical implications. When patients cannot be systematically followed, compromise may be necessary to select the best available estimand that can feasibly be estimated under the circumstances. We discuss the use of sensitivity and supplementary analyses to examine assumptions of interest.
{"title":"Time-to-event estimands and loss to follow-up in oncology in light of the estimands guidance.","authors":"Jonathan M Siegel, Hans-Jochen Weber, Stefan Englert, Feng Liu, Michelle Casey","doi":"10.1002/pst.2386","DOIUrl":"10.1002/pst.2386","url":null,"abstract":"<p><p>Time-to-event estimands are central to many oncology clinical trials. The estimands framework (addendum to the ICH E9 guideline) calls for precisely defining the treatment effect of interest to align with the clinical question of interest and requires predefining the handling of intercurrent events (ICEs) that occur after treatment initiation and \"affect either the interpretation or the existence of the measurements associated with the clinical question of interest.\" We discuss a practical problem in clinical trial design and execution, that is, in some clinical contexts it is not feasible to systematically follow patients to an event of interest. Loss to follow-up in the presence of intercurrent events can affect the meaning and interpretation of the study results. We provide recommendations for trial design, stressing the need for close alignment of the clinical question of interest and study design, impact on data collection, and other practical implications. When patients cannot be systematically followed, compromise may be necessary to select the best available estimand that can be feasibly estimated under the circumstances. We discuss the use of sensitivity and supplementary analyses to examine assumptions of interest.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"709-727"},"PeriodicalIF":16.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140326928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of the odds ratio from multi-stage randomized trials.
Shiwei Cao, Sin-Ho Jung
Pub Date: 2024-09-01. Epub Date: 2024-03-10. DOI: 10.1002/pst.2378. Pharmaceutical Statistics, pages 662-677.
A multi-stage design for a randomized trial allows early termination of the study when the experimental arm is found to have low or high efficacy compared to the control during the study. In such a trial, an early stopping rule results in bias in the maximum likelihood estimator of the treatment effect. We consider multi-stage randomized trials with a dichotomous outcome, such as treatment response, and investigate the estimation of the odds ratio. Typically, randomized phase II cancer clinical trials have two-stage designs with small sample sizes, which makes the estimation of the odds ratio more challenging. In this paper, we evaluate several existing estimation methods for the odds ratio and propose bias-corrected estimators for randomized multi-stage trials, including randomized phase II cancer clinical trials. Through numerical studies, the proposed estimators are shown to have smaller bias and smaller mean squared error overall.
{"title":"Estimation of the odds ratio from multi-stage randomized trials.","authors":"Shiwei Cao, Sin-Ho Jung","doi":"10.1002/pst.2378","DOIUrl":"10.1002/pst.2378","url":null,"abstract":"<p><p>A multi-stage design for a randomized trial is to allow early termination of the study when the experimental arm is found to have low or high efficacy compared to the control during the study. In such a trial, an early stopping rule results in bias in the maximum likelihood estimator of the treatment effect. We consider multi-stage randomized trials on a dichotomous outcome, such as treatment response, and investigate the estimation of the odds ratio. Typically, randomized phase II cancer clinical trials have two-stage designs with small sample sizes, which makes the estimation of odds ratio more challenging. In this paper, we evaluate several existing estimation methods of odds ratio and propose bias-corrected estimators for randomized multi-stage trials, including randomized phase II cancer clinical trials. Through numerical studies, the proposed estimators are shown to have a smaller bias and a smaller mean squared error overall.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"662-677"},"PeriodicalIF":16.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140094414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of sceptical priors on the performance of adaptive clinical trials with binary outcomes.
Anders Granholm, Theis Lange, Michael O Harhay, Anders Perner, Morten Hylander Møller, Benjamin Skov Kaas-Hansen
Pub Date: 2024-09-01. Epub Date: 2024-03-29. DOI: 10.1002/pst.2387. Pharmaceutical Statistics, pages 728-741. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11438950/pdf/
It is unclear how sceptical priors impact adaptive trials. We assessed the influence of priors expressing a spectrum of scepticism on the performance of several Bayesian, multi-stage, adaptive clinical trial designs using binary outcomes under different clinical scenarios. Simulations were conducted using fixed stopping rules and stopping rules calibrated to keep type 1 error rates at approximately 5%. We assessed total sample sizes, event rates, event counts, probabilities of conclusiveness and of selecting the best arm, root mean squared errors (RMSEs) of the estimated treatment effect in the selected arms, and ideal design percentages (IDPs; which combine arm selection probabilities, power, and the consequences of selecting inferior arms), with RMSEs and IDPs estimated in conclusive trials only and after selecting the control arm in inconclusive trials. Using fixed stopping rules, increasingly sceptical priors led to larger sample sizes, more events, higher IDPs in simulations ending in superiority, and lower RMSEs, lower probabilities of conclusiveness/selecting the best arm, and lower IDPs when selecting controls in inconclusive simulations. With calibrated stopping rules, the effects of increased scepticism on sample sizes and event counts were attenuated, and increased scepticism increased the probabilities of conclusiveness/selecting the best arm and the IDPs when selecting controls in inconclusive simulations without substantially increasing sample sizes. Results from trial designs with gentle adaptation and non-informative priors resembled those from designs with more aggressive adaptation using weakly-to-moderately sceptical priors. In conclusion, the use of somewhat sceptical priors in adaptive trial designs with binary outcomes seems reasonable when considering multiple performance metrics simultaneously.
{"title":"Effects of sceptical priors on the performance of adaptive clinical trials with binary outcomes.","authors":"Anders Granholm, Theis Lange, Michael O Harhay, Anders Perner, Morten Hylander Møller, Benjamin Skov Kaas-Hansen","doi":"10.1002/pst.2387","DOIUrl":"10.1002/pst.2387","url":null,"abstract":"<p><p>It is unclear how sceptical priors impact adaptive trials. We assessed the influence of priors expressing a spectrum of scepticism on the performance of several Bayesian, multi-stage, adaptive clinical trial designs using binary outcomes under different clinical scenarios. Simulations were conducted using fixed stopping rules and stopping rules calibrated to keep type 1 error rates at approximately 5%. We assessed total sample sizes, event rates, event counts, probabilities of conclusiveness and selecting the best arm, root mean squared errors (RMSEs) of the estimated treatment effect in the selected arms, and ideal design percentages (IDPs; which combines arm selection probabilities, power, and consequences of selecting inferior arms), with RMSEs and IDPs estimated in conclusive trials only and after selecting the control arm in inconclusive trials. Using fixed stopping rules, increasingly sceptical priors led to larger sample sizes, more events, higher IDPs in simulations ending in superiority, and lower RMSEs, lower probabilities of conclusiveness/selecting the best arm, and lower IDPs when selecting controls in inconclusive simulations. With calibrated stopping rules, the effects of increased scepticism on sample sizes and event counts were attenuated, and increased scepticism increased the probabilities of conclusiveness/selecting the best arm and IDPs when selecting controls in inconclusive simulations without substantially increasing sample sizes. Results from trial designs with gentle adaptation and non-informative priors resembled those from designs with more aggressive adaptation using weakly-to-moderately sceptical priors. In conclusion, the use of somewhat sceptical priors in adaptive trial designs with binary outcomes seems reasonable when considering multiple performance metrics simultaneously.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"728-741"},"PeriodicalIF":1.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11438950/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140326927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital twins and Bayesian dynamic borrowing: Two recent approaches for incorporating historical control data.
Carl-Fredrik Burman, Erik Hermansson, David Bock, Stefan Franzén, David Svensson
Pub Date: 2024-09-01. Epub Date: 2024-03-04. DOI: 10.1002/pst.2376. Pharmaceutical Statistics, pages 611-629.
Recent years have seen an increasing interest in incorporating external control data for designing and evaluating randomized clinical trials (RCTs). This may decrease costs and shorten inclusion times by reducing sample sizes. For small populations with limited recruitment, this can be especially important. Bayesian dynamic borrowing (BDB) has been a popular choice, as it claims to protect against potential prior-data conflict. Digital twins (DT) have recently been proposed as another method to utilize historical data. DT, also known as PROCOVA™, is based on constructing a prognostic score from historical control data, typically using machine learning. This score is included in a pre-specified ANCOVA as the primary analysis of the RCT. The promise of this idea is a power increase while guaranteeing strong type 1 error control. In this paper, we apply analytic derivations and simulations to analyze and discuss examples of these two approaches. We conclude that BDB and DT, although similar in scope, have fundamental differences which need to be considered in the specific application. The inflation of the type 1 error is a serious issue for BDB, while more evidence is needed of a tangible value of DT for real RCTs.
{"title":"Digital twins and Bayesian dynamic borrowing: Two recent approaches for incorporating historical control data.","authors":"Carl-Fredrik Burman, Erik Hermansson, David Bock, Stefan Franzén, David Svensson","doi":"10.1002/pst.2376","DOIUrl":"10.1002/pst.2376","url":null,"abstract":"<p><p>Recent years have seen an increasing interest in incorporating external control data for designing and evaluating randomized clinical trials (RCT). This may decrease costs and shorten inclusion times by reducing sample sizes. For small populations, with limited recruitment, this can be especially important. Bayesian dynamic borrowing (BDB) has been a popular choice as it claims to protect against potential prior data conflict. Digital twins (DT) has recently been proposed as another method to utilize historical data. DT, also known as PROCOVA™, is based on constructing a prognostic score from historical control data, typically using machine learning. This score is included in a pre-specified ANCOVA as the primary analysis of the RCT. The promise of this idea is power increase while guaranteeing strong type 1 error control. In this paper, we apply analytic derivations and simulations to analyze and discuss examples of these two approaches. We conclude that BDB and DT, although similar in scope, have fundamental differences which need be considered in the specific application. The inflation of the type 1 error is a serious issue for BDB, while more evidence is needed of a tangible value of DT for real RCTs.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"611-629"},"PeriodicalIF":16.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140028706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation-based sample size calculations of marginal proportional means models for recurrent events with competing risks.
Julie Funch Furberg, Per Kragh Andersen, Thomas Scheike, Henrik Ravn
Pub Date: 2024-09-01. Epub Date: 2024-03-20. DOI: 10.1002/pst.2382. Pharmaceutical Statistics, pages 687-708.
In randomised controlled trials, the outcome of interest can be recurrent events, such as hospitalisations for heart failure. If mortality rates are non-negligible, both recurrent events and competing terminal events need to be addressed when formulating the estimand, and the statistical analysis is no longer trivial. In order to design future trials with primary recurrent event endpoints in the presence of competing risks, it is necessary to be able to perform power calculations to determine sample sizes. This paper introduces a simulation-based approach for power estimation based on a proportional means model for recurrent events and a proportional hazards model for terminal events. The simulation procedure is presented along with a discussion of what the user needs to specify to use the approach. The method is flexible and based on marginal quantities which are easy to specify. However, the method does not capture a certain type of dependence between the recurrent and terminal events; this is explored in a sensitivity analysis, which suggests that the power is robust in spite of it. Data from a randomised controlled trial, LEADER, are used as the basis for generating data for a future trial. Finally, potential power gains of recurrent event methods over first-event methods are discussed.
{"title":"Simulation-based sample size calculations of marginal proportional means models for recurrent events with competing risks.","authors":"Julie Funch Furberg, Per Kragh Andersen, Thomas Scheike, Henrik Ravn","doi":"10.1002/pst.2382","DOIUrl":"10.1002/pst.2382","url":null,"abstract":"<p><p>In randomised controlled trials, the outcome of interest could be recurrent events, such as hospitalisations for heart failure. If mortality rates are non-negligible, both recurrent events and competing terminal events need to be addressed when formulating the estimand and statistical analysis is no longer trivial. In order to design future trials with primary recurrent event endpoints with competing risks, it is necessary to be able to perform power calculations to determine sample sizes. This paper introduces a simulation-based approach for power estimation based on a proportional means model for recurrent events and a proportional hazards model for terminal events. The simulation procedure is presented along with a discussion of what the user needs to specify to use the approach. The method is flexible and based on marginal quantities which are easy to specify. However, the method introduces a lack of a certain type of dependence. This is explored in a sensitivity analysis which suggests that the power is robust in spite of that. Data from a randomised controlled trial, LEADER, is used as the basis for generating data for a future trial. Finally, potential power gains of recurrent event methods as opposed to first event methods are discussed.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"687-708"},"PeriodicalIF":16.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140175941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal sample size allocation for two-arm superiority and non-inferiority trials with binary endpoints.
Marietta Kirchner, Stefanie Schüpke, Meinhard Kieser
Pub Date: 2024-09-01. DOI: 10.1002/pst.2375. Pharmaceutical Statistics, pages 678-686.
The sample size of a clinical trial has to be large enough to ensure sufficient power for achieving the aim of the study. On the other hand, for ethical and economic reasons it should not be larger than necessary. The sample size allocation is one of the parameters that influence the required total sample size. For two-arm superiority and non-inferiority trials with binary endpoints, we performed extensive computations over a wide range of scenarios to determine the optimal allocation ratio that minimizes the total sample size when all other parameters are fixed. The results demonstrate that for both superiority and non-inferiority trials the optimal allocation may deviate considerably from equal sample sizes in both groups. However, the saving in sample size when allocating the total sample size optimally, as compared to balanced allocation, is typically small.
A case study: Assessing the efficacy of the revised dosage regimen via prediction model for recurrent event rate using biomarker data.
Ahrim Youn, Jiarui Chi, Yue Cui, Hui Quan
Pub Date: 2024-07-01. Epub Date: 2024-02-05. DOI: 10.1002/pst.2362. Pharmaceutical Statistics, pages 570-584.
In recently conducted phase III trials in a rare disease area, patients received monthly treatment at a high dose of the drug, which aims to lower a specific biomarker level, closely associated with the efficacy endpoint, to around 10% across patients. Although this high dose demonstrated strong efficacy, treatments were withheld due to reports of serious adverse events. Dosing in these studies was later resumed at a reduced dosage, which aims to lower the biomarker level to 15%-35% across patients. Two questions arose after this disruption. The first is whether the efficacy of the revised regimen, as measured by the reduction in the annualized event rate, is adequate to support continuing the development; the second is whether the potential bias due to the loss of patients during this dosing gap can be gauged. To address these questions, we built a prediction model that quantitatively characterizes the biomarker vs. endpoint relationship and predicts efficacy in the 15%-35% range of the biomarker level using the available data from the original high dose. The model predicts a favorable event rate in the target biomarker range and shows that the bias due to the loss of patients is limited. These results support the continued development of the revised regimen; however, given the limitations of the available data, this prediction is planned to be validated further when data under the revised regimen become available.
{"title":"A case study: Assessing the efficacy of the revised dosage regimen via prediction model for recurrent event rate using biomarker data.","authors":"Ahrim Youn, Jiarui Chi, Yue Cui, Hui Quan","doi":"10.1002/pst.2362","DOIUrl":"10.1002/pst.2362","url":null,"abstract":"<p><p>In recently conducted phase III trials in a rare disease area, patients received monthly treatment at a high dose of the drug, which targets to lower a specific biomarker level, closely associated with the efficacy endpoint, to around 10% across patients. Although this high dose demonstrated strong efficacy, treatments were withheld due to the reports of serious adverse events. Dosing in these studies were later resumed at a reduced dosage which targets to lower the biomarker level to 15%-35% across patients. Two questions arose after this disruption. The first is whether the efficacy of this revised regimen as measured by the reduction in annualized event rate is adequate to support the continuation of the development and the second is whether the potential bias due to the loss of patients during this dosing gap process can be gauged. To address these questions, we built a prediction model that quantitatively characterizes biomarker vs. endpoint relationship and predicts efficacy at the 15%-35% range of the biomarker level using the available data from the original high dose. This model predicts favorable event rate in the target biomarker level and shows that the bias due to the loss of patients is limited. These results support the continued development of the revised regimen, however, given the limitation of the data available, this prediction is planned to be validated further when data under the revised regimen become available.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"570-584"},"PeriodicalIF":1.3,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139692642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting subgroup treatment effects for a new study: Motivations, results and learnings from running a data challenge in a pharmaceutical corporation.
Björn Bornkamp, Silvia Zaoli, Michela Azzarito, Ruvie Martin, Carsten Philipp Müller, Conor Moloney, Giulia Capestro, David Ohlssen, Mark Baillie
Pub Date: 2024-07-01. Epub Date: 2024-02-07. DOI: 10.1002/pst.2368. Pharmaceutical Statistics, pages 495-510.
We present the motivation, experience, and learnings from a data challenge conducted at a large pharmaceutical corporation on the topic of subgroup identification. The data challenge aimed at exploring approaches to subgroup identification for future clinical trials. To mimic a realistic setting, participants had access to 4 Phase III clinical trials to derive a subgroup and predict its treatment effect in a future study that was not accessible to challenge participants. A total of 30 teams registered for the challenge, with around 100 participants, primarily from the Biostatistics organization. We outline the motivation for running the challenge, the challenge rules, and the logistics. Finally, we present the results of the challenge, the participant feedback, and the learnings. We also present our view on the implications of the results for exploratory analyses related to treatment effect heterogeneity.
{"title":"Predicting subgroup treatment effects for a new study: Motivations, results and learnings from running a data challenge in a pharmaceutical corporation.","authors":"Björn Bornkamp, Silvia Zaoli, Michela Azzarito, Ruvie Martin, Carsten Philipp Müller, Conor Moloney, Giulia Capestro, David Ohlssen, Mark Baillie","doi":"10.1002/pst.2368","DOIUrl":"10.1002/pst.2368","url":null,"abstract":"<p><p>We present the motivation, experience, and learnings from a data challenge conducted at a large pharmaceutical corporation on the topic of subgroup identification. The data challenge aimed at exploring approaches to subgroup identification for future clinical trials. To mimic a realistic setting, participants had access to 4 Phase III clinical trials to derive a subgroup and predict its treatment effect on a future study not accessible to challenge participants. A total of 30 teams registered for the challenge with around 100 participants, primarily from Biostatistics organization. We outline the motivation for running the challenge, the challenge rules, and logistics. Finally, we present the results of the challenge, the participant feedback as well as the learnings. We also present our view on the implications of the results on exploratory analyses related to treatment effect heterogeneity.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"495-510"},"PeriodicalIF":1.3,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139703123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the performance of group-based trajectory modeling method to discover different patterns of medication adherence.
Awa Diop, Alind Gupta, Sabrina Mueller, Louis Dron, Ofir Harari, Heather Berringer, Vinusha Kalatharan, Jay J H Park, Miceline Mésidor, Denis Talbot
Pub Date: 2024-07-01. Epub Date: 2024-02-08. DOI: 10.1002/pst.2365. Pharmaceutical Statistics, pages 511-529.
It is well known that medication adherence is critical to patient outcomes and can decrease patient mortality. The Pharmacy Quality Alliance (PQA) has recognized and identified medication adherence as an important indicator of medication-use quality. Hence, there is a need to use the right methods to assess medication adherence. The PQA has endorsed the proportion of days covered (PDC) as the primary method of measuring adherence. Although easy to calculate, the PDC has several drawbacks as a measure of adherence: it is a deterministic approach that cannot capture the complexity of a dynamic phenomenon. Group-based trajectory modeling (GBTM) is increasingly proposed as an alternative to capture heterogeneity in medication adherence. The main goal of this paper is to demonstrate, through a simulation study, the ability of GBTM to capture treatment adherence when compared to its deterministic PDC analogue and to nonparametric longitudinal K-means. A time-varying treatment was generated as a quadratic function of time, baseline, and time-varying covariates. Three trajectory models are considered, combining a cat's cradle effect and a rainbow effect. The performance of GBTM was compared to the PDC and longitudinal K-means using the absolute bias, the variance, the c-statistics, the relative bias, and the relative variance. For all explored scenarios, we find that GBTM performed better in capturing different patterns of medication adherence, with lower relative bias and variance than PDC and longitudinal K-means, even under model misspecification.
{"title":"Assessing the performance of group-based trajectory modeling method to discover different patterns of medication adherence.","authors":"Awa Diop, Alind Gupta, Sabrina Mueller, Louis Dron, Ofir Harari, Heather Berringer, Vinusha Kalatharan, Jay J H Park, Miceline Mésidor, Denis Talbot","doi":"10.1002/pst.2365","DOIUrl":"10.1002/pst.2365","url":null,"abstract":"<p><p>It is well known that medication adherence is critical to patient outcomes and can decrease patient mortality. The Pharmacy Quality Alliance (PQA) has recognized and identified medication adherence as an important indicator of medication-use quality. Hence, there is a need to use the right methods to assess medication adherence. The PQA has endorsed the proportion of days covered (PDC) as the primary method of measuring adherence. Although easy to calculate, the PDC has however several drawbacks as a method of measuring adherence. PDC is a deterministic approach that cannot capture the complexity of a dynamic phenomenon. Group-based trajectory modeling (GBTM) is increasingly proposed as an alternative to capture heterogeneity in medication adherence. The main goal of this paper is to demonstrate, through a simulation study, the ability of GBTM to capture treatment adherence when compared to its deterministic PDC analogue and to the nonparametric longitudinal K-means. A time-varying treatment was generated as a quadratic function of time, baseline, and time-varying covariates. Three trajectory models are considered combining a cat's cradle effect, and a rainbow effect. The performance of GBTM was compared to the PDC and longitudinal K-means using the absolute bias, the variance, the c-statistics, the relative bias, and the relative variance. For all explored scenarios, we find that GBTM performed better in capturing different patterns of medication adherence with lower relative bias and variance even under model misspecification than PDC and longitudinal K-means.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"511-529"},"PeriodicalIF":1.3,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139703122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}