Pub Date: 2024-04-29 | DOI: 10.1177/17407745241243307
Daniel F Heitjan
Title: Comment on "Causal interpretation of the hazard ratio in randomized clinical trials" by Fay and Li
Journal: Clinical Trials (Journal Article, IF 2.7)
Pub Date: 2024-04-27 | DOI: 10.1177/17407745241243045
Tom Gugel, Karen Adams, Madelon Baranoski, N David Yanez, Michael Kampp, Tesheia Johnson, Ani Aydin, Elaine C Fajardo, Emily Sharp, Aartee Potnis, Chanel Johnson, Miriam M Treggiari
Introduction:
Emergency clinical research has played an important role in improving outcomes for acutely ill patients. This is due in part to regulatory measures that allow Exception From Informed Consent (EFIC) trials. The Food and Drug Administration (FDA) requires sponsor-investigators to engage in community consultation and public disclosure activities before initiating an EFIC trial. Various approaches to community consultation and public disclosure have been described and adapted to local contexts and Institutional Review Board (IRB) interpretations. The COVID-19 pandemic precluded engaging local communities through direct, in-person public venues, requiring research teams to find alternative ways to inform communities about emergency research.
Methods:
The PreVent and PreVent 2 studies were two EFIC trials of emergency endotracheal intubation, conducted in one geographic location for the PreVent Study and in two geographic locations for the PreVent 2 Study. Over the course of the two studies, spanning the periods before and after the pandemic, the methodological approach shifted substantially from telephone, to in-person, to virtual settings.
Results:
During the 10 years of EFIC activities for the two PreVent trials, public support was generally favorable both for the concept of EFIC trials and for the importance of emergency clinical research. Community concerns were few and did not differ much by method of contact. Attendance was higher with virtual technology to reach members of the community, and overall feedback was more positive than with telephone contacts or in-person events. However, the proportion of survey responses received after completion of the remote, live event was substantially lower, with a greater proportion of respondents having higher education levels. This suggests less active engagement after completion of the synchronous activity and potentially greater selection bias among respondents. Importantly, we found that engagement with local community leaders was a key component of developing appropriate plans to connect with the public.
Conclusion:
The PreVent experience illustrated operational advantages and disadvantages of community consultation conducted primarily by telephone, in-person events, or online activities. Approaches to enhance community acceptance included partnering with community leaders to optimize communication strategies and building trust through the involvement of IRB representatives during community meetings. Researchers may need to pivot from in-person planning to virtual techniques while maintaining two-way communication with the public. Given the less active engagement and the potential for selection bias among respondents, further research is needed to address these limitations.
Title: Design and implementation of community consultation for research conducted under exception from informed consent regulations for the PreVent and the PreVent 2 trials: Changes over time and during the COVID-19 pandemic
Pub Date: 2024-04-23 | DOI: 10.1177/17407745241244753
Laura A. Levit, E. Garrett-Mayer, Jeffrey Peppercorn, M. Ratain
This article reviews the challenges of implementing the American Society of Clinical Oncology's ethical framework for including research biopsies in oncology clinical trials. The primary implementation challenges relate to the definitions of secondary endpoints, the scientific and regulatory framework, and the incentive structure that encourages inclusion of biopsies. Principles of research stewardship require that the clinical trials community correctly articulate the scientific goals of any research biopsies, especially those required for a patient to enroll on a trial and receive an investigational agent. Furthermore, it is important to sufficiently justify the characterization of secondary (as distinguished from exploratory) endpoints, protect the interests of research participants, and report accurate and complete information to ClinicalTrials.gov and the published literature.
Title: Critical importance of correctly defining and reporting secondary endpoints when assessing the ethics of research biopsies
Pub Date: 2024-04-22 | DOI: 10.1177/17407745241244790
Philip M. Westgate, Shawn R. Nigam, Abigail B Shoben
BACKGROUND/AIMS
When designing a cluster randomized trial, the advantages and disadvantages of candidate designs must be weighed. The stepped wedge design is popular for multiple reasons, including its potential to increase power via improved efficiency relative to a parallel-group design. In many realistic settings, it takes time for clusters to fully implement the intervention. When designing the HEALing (Helping to End Addiction Long-termSM) Communities Study, implementation time was a major consideration, and we examined the efficiency and practicality of three designs: a three-sequence stepped wedge design with implementation periods, a corresponding two-sequence modified design created by removing the middle sequence, and a parallel-group design with baseline and implementation periods. In this article, we study the relative efficiencies of these specific designs. More generally, we study the relative efficiencies of modified designs when the stepped wedge design with implementation periods has three or more sequences. We also consider different correlation structures.
METHODS
We compare the efficiencies of stepped wedge designs with implementation periods consisting of three to nine sequences with a variety of corresponding designs. The three-sequence design is compared to the two-sequence modified design and to the parallel-group design with baseline and implementation periods analysed via analysis of covariance. Stepped wedge designs with implementation periods consisting of four or more sequences are compared to modified designs that remove all or a subset of 'middle' sequences. Efficiencies are based on linear mixed effects models.
RESULTS
In the studied settings, the modified design is more efficient than the three-sequence stepped wedge design with implementation periods. The parallel-group design with baseline and implementation periods, analysed via analysis of covariance, is often more efficient than the three-sequence design. For stepped wedge designs with implementation periods comprising more sequences, there are often corresponding modified designs that improve efficiency. However, using only the first and last sequences can be either relatively efficient or inefficient. Relative efficiency depends on the strength of the statistical correlation among outcomes from the same cluster; for example, the relative efficiencies of modified designs tend to be greater for smaller cluster auto-correlation values.
CONCLUSION
If a three-sequence stepped wedge design with implementation periods is being considered for a future cluster randomized trial, then a corresponding modified design using only the first and last sequences should be considered if the sole focus is on efficiency. However, a parallel-group design with baseline and implementation periods, analysed via analysis of covariance, can be a practical, efficient alternative. For stepped wedge designs with implementation periods and more sequences, modified versions that remove 'middle' sequences should be considered. Because design efficiency can be sensitive to it, the statistical correlation structure should be considered carefully.
Title: Reconsidering stepped wedge cluster randomized trial designs with implementation periods: Fewer sequences or the parallel-group design with baseline and implementation periods are potentially more efficient
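The efficiency comparisons in the abstract above rest on the variance of the treatment-effect estimator under a linear mixed model. A minimal sketch of that idea using the standard Hussey & Hughes variance formula for cross-sectional cluster designs (an assumption: the article's own model includes implementation periods and richer correlation structures; `hh_variance`, the example layouts, and the variance parameters here are purely illustrative):

```python
# Sketch (not the authors' computation): treatment-effect variance for
# cross-sectional cluster designs under the Hussey & Hughes linear mixed
# model, used to compare relative efficiency of candidate layouts.
# sigma2 = residual variance of a cluster-period mean; tau2 = between-cluster variance.

def hh_variance(X, sigma2, tau2):
    """X[i][j] = 1 if cluster i receives the intervention in period j."""
    I = len(X)      # number of clusters
    T = len(X[0])   # number of periods
    U = sum(sum(row) for row in X)
    W = sum(sum(X[i][j] for i in range(I)) ** 2 for j in range(T))
    V = sum(sum(row) ** 2 for row in X)
    num = I * sigma2 * (sigma2 + T * tau2)
    den = (I * U - W) * sigma2 + (U * U + I * T * U - T * W - I * V) * tau2
    return num / den

# Three-sequence stepped wedge over 4 periods (clusters cross over one by one)
sw = [[0, 1, 1, 1],
      [0, 0, 1, 1],
      [0, 0, 0, 1]]
# Parallel design over the same 4 periods (2 treated clusters, 1 control)
par = [[1] * 4, [1] * 4, [0] * 4]

v_sw = hh_variance(sw, 1.0, 0.1)
v_par = hh_variance(par, 1.0, 0.1)
print(f"stepped wedge var={v_sw:.3f}, parallel var={v_par:.3f}")
```

As the abstract notes, which layout wins depends on the correlation parameters; rerunning with a larger `tau2` shifts the comparison toward the stepped wedge.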
Pub Date: 2024-04-15 | DOI: 10.1177/17407745241244788
Karen M Higgins, Gregory Levin, Robert Busch
Randomization and blinding are regarded as the most important tools for reducing bias in clinical trial designs. Randomization helps guarantee that treatment arms differ systematically only by treatment assignment at baseline, and blinding ensures that differences in endpoint evaluation and clinical decision-making during the trial arise only from the treatment received and not, for example, from the expectations or desires of the people involved. However, given that it is not always feasible or ethical to conduct fully blinded trials, we discuss what can be done to improve a trial, including conducting it as if it were fully blinded and maintaining confidentiality of ongoing study results. In this article, we review how best to design, conduct, and analyze open-label trials to ensure the highest level of study integrity and the reliability of study conclusions.
Title: Considerations for open-label randomized clinical trials: Design, conduct, and analysis
Pub Date: 2024-04-15 | DOI: 10.1177/17407745241238443
Dan-Yu Lin, Jianqiao Wang, Yu Gu, Donglin Zeng
Background:
The current endpoints for therapeutic trials of hospitalized COVID-19 patients capture only part of the clinical course of a patient and have limited statistical power and robustness.
Methods:
We specify proportional odds models for repeated measures of clinical status, with a common odds ratio of lower severity over time. We also specify proportional hazards models for time to each level of improvement or deterioration of clinical status, with a common hazard ratio for overall treatment benefit. We apply these methods to the Adaptive COVID-19 Treatment Trials.
Results:
For remdesivir versus placebo, the common odds ratio was 1.48 (95% confidence interval (CI) = 1.23–1.79; p < 0.001), and the common hazard ratio was 1.27 (95% CI = 1.09–1.47; p = 0.002). For baricitinib plus remdesivir versus remdesivir alone, the common odds ratio was 1.32 (95% CI = 1.10–1.57; p = 0.002), and the common hazard ratio was 1.30 (95% CI = 1.13–1.49; p < 0.001). For interferon beta-1a plus remdesivir versus remdesivir alone, the common odds ratio was 0.95 (95% CI = 0.79–1.14; p = 0.56), and the common hazard ratio was 0.98 (95% CI = 0.85–1.12; p = 0.74).
Conclusions:
The proposed methods comprehensively characterize treatment effects over the entire clinical course of a hospitalized COVID-19 patient.
Title: Evaluating treatment efficacy in hospitalized COVID-19 patients, with applications to Adaptive COVID-19 Treatment Trials
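The "common odds ratio of lower severity" in the abstract above is a proportional odds quantity: a single odds ratio shifts every cumulative logit of the ordinal clinical-status scale. A small sketch of that mechanic (assumptions: the 4-level control-arm distribution and the helper `shift_proportional_odds` are hypothetical illustrations; only the 1.48 odds ratio comes from the abstract):

```python
# Sketch of proportional-odds mechanics (not the authors' code): apply a
# common odds ratio to the cumulative odds of being at or below each
# severity level, then recover the implied treated-arm category probabilities.

def shift_proportional_odds(control_probs, odds_ratio):
    """Return treated-arm category probabilities implied by a common OR."""
    cum, control_cum = 0.0, []
    for p in control_probs[:-1]:
        cum += p
        control_cum.append(cum)            # P(Y <= k | control)
    treated_cum = []
    for c in control_cum:
        odds = odds_ratio * c / (1.0 - c)  # multiply cumulative odds by OR
        treated_cum.append(odds / (1.0 + odds))
    treated, prev = [], 0.0
    for c in treated_cum + [1.0]:          # differencing recovers categories
        treated.append(c - prev)
        prev = c
    return treated

# Hypothetical 4-level severity distribution (level 1 = least severe) in the
# control arm, shifted by the remdesivir common odds ratio of 1.48.
control = [0.40, 0.30, 0.20, 0.10]
treated = shift_proportional_odds(control, 1.48)
print([round(p, 3) for p in treated])
```

With an odds ratio above 1 toward lower severity, probability mass moves into the milder categories, which is exactly the direction of benefit the abstract reports for remdesivir.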
Pub Date: 2024-04-15 | DOI: 10.1177/17407745241240401
Cody Chiuzan, Hakim-Moulay Dehbi
In the last few years, numerous novel designs have been proposed to improve the efficiency and accuracy of phase I trials in identifying the maximum tolerated dose (MTD) or the optimal biological dose (OBD) for noncytotoxic agents. However, the conventional 3+3 approach, known for its simplicity but poor performance, continues to be an attractive choice for many trials despite these alternatives. This article underscores the importance of moving beyond the 3+3 design by highlighting a different key element in trial design: the estimation of sample size and its crucial role in predicting toxicity and determining the MTD. We use simulation studies to compare the performance of the most commonly used phase I approaches, the 3+3, Continual Reassessment Method (CRM), Keyboard, and Bayesian Optimal Interval (BOIN) designs, on three key operating characteristics: the percentage of correct selection of the true MTD, the average number of patients allocated per dose level, and the average total sample size. The simulation results consistently show that the 3+3 algorithm underperforms model-based and model-assisted designs across all scenarios and metrics. The 3+3 method yields substantially lower (up to three times) probabilities of identifying the correct MTD, often selecting doses one or even two levels below the actual MTD. The 3+3 design allocates significantly fewer patients at the true MTD, assigns more to lower dose levels, and rarely explores doses above the target dose-limiting toxicity (DLT) rate. The overall performance of the 3+3 method is suboptimal, with a high level of unexplained uncertainty and significant implications for accurately determining the MTD. While the primary focus of the article is to demonstrate the limitations of the 3+3 algorithm, the question remains which alternative approach to prefer. The intention is not to definitively recommend one model-based or model-assisted method over others, as their performance can vary with parameters and model specifications. However, the presented results indicate that the CRM, Keyboard, and BOIN designs consistently outperform the 3+3 and offer improved efficiency and precision in determining the MTD, which is crucial in early-phase clinical trials.
Title: The 3 + 3 design in dose-finding studies with small sample sizes: Pitfalls and possible remedies
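Operating characteristics like "percentage of correct selection of the true MTD" come from simulating the escalation rule many times. A minimal sketch of such a simulation for the textbook 3+3 rule (assumptions: this is the generic simplified 3+3 algorithm and the true DLT rates are hypothetical, not the article's scenarios or its simulation code):

```python
# Sketch (not the article's simulator): simulate the classic 3+3 rule against
# known true DLT probabilities and estimate how often it picks the true MTD.
import random

def run_3plus3(true_dlt, rng):
    """Return the index of the declared MTD (-1 if even dose 0 is too toxic)."""
    dose = 0
    while dose < len(true_dlt):
        dlts = sum(rng.random() < true_dlt[dose] for _ in range(3))
        if dlts == 0:
            dose += 1                      # 0/3 DLTs: escalate
            continue
        if dlts == 1:                      # 1/3 DLTs: expand cohort to 6
            dlts += sum(rng.random() < true_dlt[dose] for _ in range(3))
            if dlts <= 1:
                dose += 1                  # <=1/6 DLTs: escalate
                continue
        return dose - 1                    # >=2 DLTs: MTD is the previous dose
    return len(true_dlt) - 1               # escalated past the top dose

rng = random.Random(2024)
true_dlt = [0.05, 0.10, 0.25, 0.45]        # dose index 2 sits at a 25% target
picks = [run_3plus3(true_dlt, rng) for _ in range(2000)]
correct = picks.count(2) / len(picks)
print(f"P(select true MTD) ~= {correct:.2f}")
```

Running this kind of simulation side by side with CRM or BOIN allocations is how the comparisons summarized above are made; the 3+3's tendency to stop a level or two early shows up directly in `picks`.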
Pub Date: 2024-04-09 | DOI: 10.1177/17407745231221152
Lucie Biard, Anaïs Andrillon, Rebecca B Silva, Shing M Lee
Given that novel anticancer therapies have different toxicity profiles and mechanisms of action, it is important to reconsider current approaches to dose selection. In an effort to move away from treating the maximum tolerated dose as the optimal dose, the Food and Drug Administration's Project Optimus points to the need to incorporate long-term toxicity evaluation, given that many of these novel agents lead to late-onset or cumulative toxicities and there are no guidelines on how to handle them. Numerous methods have been proposed to handle late-onset toxicities in dose-finding clinical trials. A summary and comparison of these methods are provided. Moreover, using PI3K inhibitors as a case study, we show how late-onset toxicity can be integrated into the dose-optimization strategy using currently available approaches. We illustrate a re-design of this trial to compare this approach with those that consider only early toxicity outcomes and disregard late-onset toxicities. We also provide proposals for dose optimization in the early development of novel anticancer agents with considerations for late-onset toxicities.
{"title":"Dose optimization for cancer treatments with considerations for late-onset toxicities","authors":"Lucie Biard, Anaïs Andrillon, Rebecca B Silva, Shing M Lee","doi":"10.1177/17407745231221152","DOIUrl":"https://doi.org/10.1177/17407745231221152","url":null,"abstract":"Given that novel anticancer therapies have different toxicity profiles and mechanisms of action, it is important to reconsider the current approaches for dose selection. In an effort to move away from considering the maximum tolerated dose as the optimal dose, the Food and Drug Administration Project Optimus points to the need of incorporating long-term toxicity evaluation, given that many of these novel agents lead to late-onset or cumulative toxicities and there are no guidelines on how to handle them. Numerous methods have been proposed to handle late-onset toxicities in dose-finding clinical trials. A summary and comparison of these methods are provided. Moreover, using PI3K inhibitors as a case study, we show how late-onset toxicity can be integrated into the dose-optimization strategy using current available approaches. We illustrate a re-design of this trial to compare the approach to those that only consider early toxicity outcomes and disregard late-onset toxicities. We also provide proposals going forward for dose optimization in early development of novel anticancer agents with considerations for late-onset toxicities.","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
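Among the best-known methods for handling late-onset toxicities in dose finding is the time-to-event CRM (TITE-CRM) of Cheung and Chappell, which lets partially followed patients contribute to the likelihood with a weight proportional to their observed follow-up. The sketch below is a minimal single-parameter power-model implementation with a normal prior and grid integration; the skeleton and patient data in the test are hypothetical illustrations, not the PI3K case study from the article.

```python
import math

def tite_crm_recommend(skeleton, doses_given, dlt, weights, target=0.25, grid_n=2001):
    """Minimal TITE-CRM dose recommendation (after Cheung & Chappell, 2000).

    skeleton    : prior DLT probabilities per dose level (increasing)
    doses_given : 0-based dose-level index for each enrolled patient
    dlt         : 1 if the patient has experienced a DLT, else 0
    weights     : fraction of the assessment window observed so far
                  (use 1.0 for fully followed patients and for any DLT)
    Power model: p_k(beta) = skeleton[k] ** exp(beta), beta ~ N(0, 1.34^2).
    Weighted likelihood: prod (w*p)^y * (1 - w*p)^(1-y).
    Returns the 0-based index of the dose whose posterior-mean DLT
    probability is closest to the target.
    """
    sigma2 = 1.34 ** 2
    lo, hi = -4.0, 4.0
    step = (hi - lo) / (grid_n - 1)
    num = [0.0] * len(skeleton)   # numerators of E[p_k | data]
    den = 0.0
    for i in range(grid_n):
        beta = lo + i * step
        e = math.exp(beta)
        post = math.exp(-beta * beta / (2 * sigma2))  # unnormalised prior
        for k, y, w in zip(doses_given, dlt, weights):
            p = skeleton[k] ** e
            post *= (w * p) if y else (1 - w * p)
        den += post
        for k, s in enumerate(skeleton):
            num[k] += post * s ** e
    est = [n / den for n in num]
    return min(range(len(est)), key=lambda k: abs(est[k] - target))
```

Because pending patients enter with weight below one, early, incomplete follow-up tempers escalation without pausing accrual, which is exactly the trade-off late-onset-toxicity designs aim to manage.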
Pub Date: 2024-04-09, DOI: 10.1177/17407745241243027
Accrual Quality Improvement Program for clinical trials
Ellen Richmond, Goli Samimi, Margaret House, Leslie G Ford, Eva Szabo
Background: The Early Phase Cancer Prevention Clinical Trials Program (Consortia), led by the Division of Cancer Prevention, National Cancer Institute, supports and conducts trials assessing the safety, tolerability, and cancer-preventive potential of a variety of interventions. Accrual to cancer prevention trials involves recruiting unaffected populations, posing unique challenges related to minimizing participant burden and risk, given the less evident or measurable benefits to individual participants. The Accrual Quality Improvement Program was developed to address these challenges and to better understand the multiple determinants of accrual activity throughout the life of a trial. Through continuous monitoring of accrual data, the program identifies positive and negative factors in real time to optimize enrollment rates for ongoing and future trials.
Methods: The Accrual Quality Improvement Program provides a web-based, centralized infrastructure for collecting, analyzing, visualizing, and storing qualitative and quantitative participant-, site-, and study-level data. The program approaches cancer prevention clinical trial accrual as multi-factorial, recognizing protocol design, potential participants' characteristics, and individual-site as well as study-wide implementation issues.
Results: The Accrual Quality Improvement Program was used across 39 Consortia trials from 2014 to 2022 to collect comprehensive trial information. The program captures data at the participant level, including the number of charts reviewed, the potential participants contacted, and the reasons why participants were not eligible for contact, did not consent to the trial, or did not start the intervention. It also captures site-level (e.g. staffing issues) and study-level (e.g. protocol amendments) data at each step of the recruitment/enrollment process, from potential participant identification to contact, consent, intervention, and study completion, using a Recruitment Journal. The program's functionality also includes tracking and visualization of a trial's cumulative accrual rate compared with its projected accrual rate, including a zone-based performance rating with corresponding quality-improvement intervention recommendations.
Conclusion: The challenges associated with recruitment and timely completion of early-phase cancer prevention clinical trials necessitate a data collection program capable of continuous collection and quality improvement. The Accrual Quality Improvement Program collects cumulative data across National Cancer Institute, Division of Cancer Prevention early-phase clinical trials, providing the opportunity for real-time review of participant-, site-, and study-level data and thereby enabling responsive recruitment-strategy and protocol modifications to improve recruitment rates in ongoing trials. Of note, Accrual Quality I
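The zone-based performance rating described above can be pictured as a traffic-light comparison of cumulative actual accrual against the cumulative projection at the same time point. The sketch below is purely illustrative: the green/yellow thresholds (90% and 70% of projection) and the zone labels are hypothetical placeholders, not the Accrual Quality Improvement Program's actual cut-offs or recommendations.

```python
def accrual_zone(actual_cumulative, projected_cumulative,
                 green_min=0.90, yellow_min=0.70):
    """Classify accrual performance by the ratio of cumulative actual
    enrollment to the cumulative projection at the same time point.

    The threshold defaults are illustrative, not the program's real cut-offs.
    Returns (zone, ratio).
    """
    if projected_cumulative <= 0:
        raise ValueError("projected accrual must be positive")
    ratio = actual_cumulative / projected_cumulative
    if ratio >= green_min:
        return "green", ratio    # on target: continue current strategy
    if ratio >= yellow_min:
        return "yellow", ratio   # lagging: review site-level barriers
    return "red", ratio          # off target: trigger quality-improvement steps
```

Continuous monitoring of this ratio is what allows zone transitions to prompt recruitment-strategy or protocol modifications while a trial is still enrolling.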
Pub Date: 2024-04-05, DOI: 10.1177/17407745241238444
The overlap between randomised evaluations of recruitment and retention interventions: An updated review of recruitment (Online Resource for Recruitment in Clinical triAls) and retention (Online Resource for Retention in Clinical triAls) literature
Anna Kearney, Laura Butlin, Taylor Coffey, Thomas Conway, Sarah Cotterill, Alison Evans, Jackie Fox, Andrew Hunter, Sarah Inglis, Louise Murphy, Nurulamin M Noor, Terrie Walker-Smith, Carrol Gamble
Background: The Online Resource for Recruitment in Clinical triAls (ORRCA) and the Online Resource for Retention in Clinical triAls (ORRCA2) were established to organise and map the literature addressing participant recruitment and retention within clinical research. The two databases are updated on an ongoing basis using separate but parallel systematic reviews. However, recruitment and retention of research participants are widely acknowledged to be interconnected: interventions aimed at addressing recruitment challenges can affect retention and vice versa, yet it is not clear how well the two are considered simultaneously within methodological research. This study reports the recent update of ORRCA and ORRCA2, with a special emphasis on assessing crossover between the databases and how frequently randomised studies of methodological interventions measure the impact on both recruitment and retention outcomes.
Methods: Two parallel systematic reviews were conducted in line with previously reported methods, updating ORRCA (recruitment) and ORRCA2 (retention) with publications from 2018 and 2019. Articles were categorised according to their evidence type (randomised evaluation, non-randomised evaluation, application, and observation) and against the recruitment and retention domain frameworks. Articles categorised as randomised evaluations were compared to identify studies appearing in both databases. For randomised studies present in only one database, domain categories were used to assess whether the methodological intervention was likely to affect the alternate construct, for example, whether a recruitment intervention might also affect retention.
Results: In total, 806 of 17,767 articles screened for the recruitment database and 175 of 18,656 articles screened for the retention database were added as a result of the update. Of these, 89 articles were classified as 'randomised evaluation', of which 6 were systematic reviews and 83 were randomised evaluations of methodological interventions. Ten of the randomised studies assessed both recruitment and retention and were included in both databases. Of the randomised studies only in the recruitment database, 48/55 (87%) assessed the content or format of participant information, which could have an impact on retention. Of the randomised studies only in the retention database, 6/18 (33%) assessed monetary incentives, 4/18 (22%) assessed data collection location and methods, and 3/18 (17%) assessed non-monetary incentives, all of which could have an impact on recruitment.
Conclusion: Only a small proportion of randomised studies of methodological interventions assessed the impact on both recruitment and retention, despite having a potential impact on both outcomes. Where possible, an integrated approach analysing both constructs should be the new standard for these evaluations, to ensure that improvements to recruitment are not achieved at the expense of retention and vice versa.