Pub Date: 2025-01-02 | DOI: 10.1177/17407745241304114
Exclusion of people from oncology clinical trials based on functional status.
Nicole D Agaronnik, Mary Linton B Peters, Lisa I Iezzoni
Background/aims: People with disability have higher rates of cancer, excluding skin cancer, compared with people without disability. Food and Drug Administration draft guidelines from 2024 address use of performance status criteria to determine eligibility for clinical trials, advocating for less restrictive thresholds. We examined the exclusion of people with disability from clinical trials based on performance status and other criteria.
Methods: We reviewed eligibility criteria in approved interventional Phase III and Phase IV oncology clinical trials listed on ClinicalTrials.gov between 1 January 2019 and 31 December 2023. Functional status thresholds were assessed using the Eastern Cooperative Oncology Group Performance Status Scale and Karnofsky Performance Scale in clinical trial eligibility criteria. Qualitative analysis was used to review eligibility criteria relating to functional impairments or disability.
Results: Among 96 oncology clinical trials, approximately 40% had restrictive Eastern Cooperative Oncology Group and Karnofsky Performance Scale thresholds, explicitly including only patients with Eastern Cooperative Oncology Group 0 or 1, or equivalent Karnofsky Performance Scale 70 or greater. Only 20% of studies included patients with Eastern Cooperative Oncology Group 2 and Karnofsky Performance Scale 60. Multiple studies contained miscellaneous eligibility criteria that could potentially exclude people with disability. No studies described making accommodations for people with disability to participate in the clinical trial.
Conclusion: Draft Food and Drug Administration guidelines recommend including patients with Eastern Cooperative Oncology Group scores of 2 and Karnofsky Performance Scale scores of 60 in oncology clinical trials. We found that oncology clinical trials often exclude people with disability by applying performance status thresholds more restrictive than those in the draft Food and Drug Administration guidelines, as well as other eligibility criteria that relate to disability. These estimates provide baseline information for assessing how the 2024 Food and Drug Administration guidance, if finalized, might affect the inclusion of people with disability in future trials.
{"title":"Exclusion of people from oncology clinical trials based on functional status.","authors":"Nicole D Agaronnik, Mary Linton B Peters, Lisa I Iezzoni","doi":"10.1177/17407745241304114","DOIUrl":"https://doi.org/10.1177/17407745241304114","url":null,"abstract":"<p><strong>Background/aims: </strong>People with disability have higher rates of cancer, excluding skin cancer, compared with people without disability. Food and Drug Administration draft guidelines from 2024 address use of performance status criteria to determine eligibility for clinical trials, advocating for less restrictive thresholds. We examined the exclusion of people with disability from clinical trials based on performance status and other criteria.</p><p><strong>Methods: </strong>We reviewed eligibility criteria in approved interventional Phase III and Phase IV oncology clinical trials listed on ClinicalTrails.gov between 1 January 2019 and 31 December 2023. Functional status thresholds were assessed using the Eastern Cooperative Oncology Group Performance Status Scale and Karnofsky Performance Scale in clinical trial eligibility criteria. Qualitative analysis was used to review eligibility criteria relating to functional impairments or disability.</p><p><strong>Results: </strong>Among 96 oncology clinical trials, approximately 40% had restrictive Eastern Cooperative Oncology Group and Karnofsky Performance Scale thresholds, explicitly including only patients with Eastern Cooperative Oncology Group 0 or 1, or equivalent Karnofsky Performance Scale 70 or greater. Only 20% of studies included patients with Eastern Cooperative Oncology Group 2 and Karnofsky Performance Scale 60. Multiple studies contained miscellaneous eligibility criteria that could potentially exclude people with disability. No studies described making accommodations for people with disability to participate in the clinical trial.</p><p><strong>Conclusion: </strong>Draft Food and Drug Administration guidelines recommend including patients with Eastern Cooperative Oncology Group scores of 2 and Karnofsky Performance Scale scores of 60 in oncology clinical trials. We found that oncology clinical trials often exclude people with more restrictive performance status scores than the draft Food and Drug Administration guidelines, as well as other criteria that relate to disability. These estimates provide baseline information for assessing how the 2024 Food and Drug Administration guidance, if finalized, might affect the inclusion of people with disability in future trials.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745241304114"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142913834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-31 | DOI: 10.1177/17407745241304721
Challenges in estimating the counterfactual placebo HIV incidence rate from a registration cohort: The PrEPVacc trial.
Sheila Kansiime, Christian Holm Hansen, Eugene Ruzagira, Sheena McCormack, Richard Hayes, David Dunn
Background: There is increasing recognition that the interpretation of active-controlled HIV prevention trials should consider the counterfactual placebo HIV incidence rate, that is, the rate that would have been observed if the trial had included a placebo control arm. The PrEPVacc HIV vaccine and pre-exposure prophylaxis trial (NCT04066881) incorporated a pre-trial registration cohort partly for this purpose. In this article, we describe our attempts to model the counterfactual placebo HIV incidence rate from the registration cohort.
Methods: PrEPVacc was conducted at four study sites in three African countries. During the set-up of the trial, potential participants were invited to join a registration cohort, which included HIV testing every 3 months. The trial included a non-inferiority comparison of two daily, oral pre-exposure prophylaxis regimens (emtricitabine/tenofovir disoproxil fumarate, emtricitabine/tenofovir alafenamide fumarate), administered for a target duration of 26 weeks (until 2 weeks after the third of four vaccinations). We developed a multi-variable Poisson regression model to estimate associations in the registration cohort between HIV incidence and baseline predictors (socio-demographic and behavioural variables) and time-dependent predictors (calendar time, time in follow-up). We then used the estimated regression coefficients together with participant characteristics in the active-controlled pre-exposure prophylaxis trial to predict a counterfactual placebo incidence rate. Sensitivity analyses regarding the effect of calendar period were conducted.
Results: A total of 3255 participants were followed up in the registration cohort between July 2018 and October 2022, and 1512 participants were enrolled in the trial between December 2020 and March 2023. In the registration cohort, 106 participants were diagnosed with HIV over 3638 person-years of follow-up (incidence rate = 2.9/100 person-years of follow-up; 95% confidence interval: 2.4-3.5). The final statistical model included terms for study site, gender, age, occupation, sex after using recreational drugs, time in follow-up, and calendar period. The estimated effect of calendar period was very strong, an overall 37% (95% confidence interval: 19-51) decline per year in adjusted analyses, with evidence that this effect varied by study site. In sensitivity analyses investigating different assumptions about the precise effect of calendar period, the predicted counterfactual placebo incidence ranged between 1.2 and 3.7/100 person-years of follow-up.
Conclusion: In principle, the use of a registration cohort is one of the most straightforward and reliable methods for estimating the counterfactual placebo HIV incidence. However, the predictions in PrEPVacc are complicated by an implausibly large calendar time effect, with uncertainty as to whether this can be validly extrapolated over the period of the trial.
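To make the modelling step described above concrete, here is a minimal sketch in R of predicting a counterfactual placebo rate from a Poisson regression with a person-years offset. It is a generic illustration, not the PrEPVacc analysis code: the simulated data, the variable names (events, site, gender, calendar_year, person_years) and the single calendar-period shift are all assumptions made for the example.

```r
# Minimal sketch (not the PrEPVacc analysis): predicting a counterfactual
# placebo incidence rate from a registration-cohort Poisson model.
set.seed(1)

# Simulated stand-in for a registration cohort; all variables are illustrative.
registration_cohort <- data.frame(
  events        = rpois(400, 0.03),                        # HIV seroconversions
  site          = factor(sample(1:4, 400, replace = TRUE)),
  gender        = factor(sample(c("F", "M"), 400, replace = TRUE)),
  calendar_year = runif(400, 0, 4),                        # years since cohort start
  person_years  = runif(400, 0.5, 1.5)                     # follow-up contributed
)

# Poisson regression for incidence, with log person-years as an offset
fit <- glm(events ~ site + gender + calendar_year + offset(log(person_years)),
           family = poisson(link = "log"), data = registration_cohort)

# Predict rates for the trial participants' covariate mix (here approximated by
# the cohort itself, shifted to the trial's calendar period); person_years = 1
# makes the predictions rates per person-year.
trial_like <- transform(registration_cohort, calendar_year = 3.5, person_years = 1)
rate <- mean(predict(fit, newdata = trial_like, type = "response"))
cat("Counterfactual placebo incidence:", round(100 * rate, 2), "per 100 person-years\n")
```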
{"title":"Challenges in estimating the counterfactual placebo HIV incidence rate from a registration cohort: The PrEPVacc trial.","authors":"Sheila Kansiime, Christian Holm Hansen, Eugene Ruzagira, Sheena McCormack, Richard Hayes, David Dunn","doi":"10.1177/17407745241304721","DOIUrl":"10.1177/17407745241304721","url":null,"abstract":"<p><strong>Background: </strong>There is increasing recognition that the interpretation of active-controlled HIV prevention trials should consider the counterfactual placebo HIV incidence rate, that is, the rate that would have been observed if the trial had included a placebo control arm. The PrEPVacc HIV vaccine and pre-exposure prophylaxis trial (NCT04066881) incorporated a pre-trial registration cohort partly for this purpose. In this article, we describe our attempts to model the counterfactual placebo HIV incidence rate from the registration cohort.</p><p><strong>Methods: </strong>PrEPVacc was conducted at four study sites in three African countries. During the set up of the trial, potential participants were invited to join a registration cohort, which included HIV testing every 3 months. The trial included a non-inferiority comparison of two daily, oral pre-exposure prophylaxis regimens (emtricitabine/tenofovir disoproxil fumarate, emtricitabine/tenofovir alafenamide fumarate), administered for a target duration of 26 weeks (until 2 weeks after the third of four vaccinations). We developed a multi-variable Poisson regression model to estimate associations in the registration cohort between HIV incidence and baseline predictors (socio-demographic and behavioural variables) and time-dependent predictors (calendar time, time in follow-up). We then used the estimated regression coefficients together with participant characteristics in the active-controlled pre-exposure prophylaxis trial to predict a counterfactual placebo incidence rate. Sensitivity analyses regarding the effect of calendar period were conducted.</p><p><strong>Results: </strong>A total of 3255 participants were followed up in the registration cohort between July 2018 and October 2022, and 1512 participants were enrolled in the trial between December 2020 and March 2023. In the registration cohort, 106 participants were diagnosed with HIV over 3638 person-years of follow-up (incidence rate = 2.9/100 person-years of follow-up (95% confidence interval: 2.4-3.5)). The final statistical model included terms for study site, gender, age, occupation, sex after using recreational drugs, time in follow-up, and calendar period. The estimated effect of calendar period was very strong, an overall 37% (95% confidence interval: 19-51) decline per year in adjusted analyses, with evidence that this effect varied by study site. In sensitivity analyses investigating different assumptions about the precise effect of calendar period, the predicted counterfactual placebo incidence ranged between 1.2 and 3.7/100 person-years of follow-up.</p><p><strong>Conclusion: </strong>In principle, the use of a registration cohort is one of the most straightforward and reliable methods for estimating the counterfactual placebo HIV incidence. 
However, the predictions in PrEPVacc are complicated by an implausibly large calendar time effect, with uncertainty as to whether this can be validly extrapolated over the","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745241304721"},"PeriodicalIF":2.2,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143028103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-25 | DOI: 10.1177/17407745241304331
An open-source SQL database schema for integrated clinical and translational data management in clinical trials.
Umar Niazi, Charlotte Stuart, Patricia Soares, Vincent Foure, Gareth Griffiths
Unlocking the power of personalised medicine in oncology hinges on the integration of clinical trial data with translational data (i.e. biospecimen-derived molecular information). This combined analysis allows researchers to tailor treatments to a patient's unique biological makeup. However, current practices within UK Clinical Trials Units present challenges. While clinical data are held in standardised formats, translational data are complex, diverse, and require specialised storage. This disparity in format creates significant hurdles for researchers aiming to curate, integrate and analyse these datasets effectively. This article proposes a novel solution: an open-source SQL database schema designed specifically for the needs of academic trial units. Inspired by Cancer Research UK's commitment to open data sharing and exemplified by the Southampton Clinical Trials Unit's CONFIRM trial (with over 150,000 clinical data points), this schema offers a cost-effective and practical 'middle ground' between raw data and expensive Secure Data Environments/Trusted Research Environments. By acting as a central hub for both clinical and translational data, the schema facilitates seamless data sharing and analysis. Researchers gain a holistic view of trials, enabling exploration of connections between clinical observations and the molecular underpinnings of treatment response. Detailed instructions for setting up the database are provided. The open-source nature and straightforward design ensure ease of implementation and affordability, while robust security measures safeguard sensitive data. We further showcase how researchers can leverage popular statistical software like R to directly query the database. This approach fosters collaboration within the academic discovery community, ultimately accelerating progress towards personalised cancer therapies.
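As a rough illustration of the kind of design the article describes (clinical and biospecimen-derived data held in one relational store and queried directly from R), the sketch below uses DBI with an in-memory SQLite database. The table and column names are invented for the example and do not reproduce the published CONFIRM schema.

```r
library(DBI)
library(RSQLite)  # SQLite stands in here for the unit's database server

con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Illustrative two-table schema: clinical observations and biospecimen-derived
# (translational) results, linked by a shared patient identifier.
dbExecute(con, "CREATE TABLE clinical (
                  patient_id    TEXT PRIMARY KEY,
                  arm           TEXT,
                  best_response TEXT)")
dbExecute(con, "CREATE TABLE translational (
                  sample_id  TEXT PRIMARY KEY,
                  patient_id TEXT REFERENCES clinical(patient_id),
                  assay      TEXT,
                  value      REAL)")

dbExecute(con, "INSERT INTO clinical VALUES ('P001','A','PR'), ('P002','B','SD')")
dbExecute(con, "INSERT INTO translational VALUES
                ('S1','P001','TMB',12.4), ('S2','P002','TMB',3.1)")

# Researchers query the joined clinical + translational view directly from R
dbGetQuery(con, "SELECT c.patient_id, c.arm, c.best_response, t.assay, t.value
                 FROM clinical c
                 JOIN translational t ON c.patient_id = t.patient_id")

dbDisconnect(con)
```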
{"title":"An open-source SQL database schema for integrated clinical and translational data management in clinical trials.","authors":"Umar Niazi, Charlotte Stuart, Patricia Soares, Vincent Foure, Gareth Griffiths","doi":"10.1177/17407745241304331","DOIUrl":"https://doi.org/10.1177/17407745241304331","url":null,"abstract":"<p><p>Unlocking the power of personalised medicine in oncology hinges on the integration of clinical trial data with translational data (i.e. biospecimen-derived molecular information). This combined analysis allows researchers to tailor treatments to a patient's unique biological makeup. However, current practices within UK Clinical Trials Units present challenges. While clinical data are held in standardised formats, translational data are complex, diverse, and requires specialised storage. This disparity in format creates significant hurdles for researchers aiming to curate, integrate and analyse these datasets effectively. This article proposes a novel solution: an open-source SQL database schema designed specifically for the needs of academic trial units. Inspired by Cancer Research UK's commitment to open data sharing and exemplified by the Southampton Clinical Trials Unit's CONFIRM trial (with over 150,000 clinical data points), this schema offers a cost-effective and practical 'middle ground' between raw data and expensive Secure Data Environments/Trusted Research Environments. By acting as a central hub for both clinical and translational data, the schema facilitates seamless data sharing and analysis. Researchers gain a holistic view of trials, enabling exploration of connections between clinical observations and the molecular underpinnings of treatment response. Detailed instructions for setting up the database are provided. The open-source nature and straightforward design ensure ease of implementation and affordability, while robust security measures safeguard sensitive data. We further showcase how researchers can leverage popular statistical software like R to directly query the database. This approach fosters collaboration within the academic discovery community, ultimately accelerating progress towards personalised cancer therapies.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745241304331"},"PeriodicalIF":2.2,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142884787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-18 | DOI: 10.1177/17407745241297162
Assessing institutional responsibility in scientific misconduct: A case study of enoximone research by Joachim Boldt.
Christian J Wiedermann
Background: Enoximone, a phosphodiesterase III inhibitor, was approved in Germany in 1989 and initially proposed for heart failure and perioperative cardiac conditions. The research of Joachim Boldt and his supervisor Gunter Hempelmann came under scrutiny after investigations revealed systematic scientific misconduct leading to numerous retractions. Therefore, early enoximone studies by Boldt and Hempelmann from 1988 to 1991 were reviewed.
Methods: PubMed-listed publications and dissertations on enoximone from the Justus-Liebig-University of Giessen were analyzed for study design, participant demographics, methods, and outcomes. The data were screened for duplications and inconsistencies.
Results: Of seven randomized controlled trial articles identified, two were retracted. The five non-retracted articles reported similarly designed studies and included similar patient cohorts. The analysis revealed overlap in patient demographics and reported outcomes, as well as inconsistencies in cardiac index values between trials, suggesting data duplication and manipulation. Several other articles have been retracted. The evidence of misconduct and of co-authorship of retracted studies during Boldt's late formative years indicates inadequate mentorship. The university's slow response in supporting the retraction of publications involving scientific misconduct indicates systemic oversight problems.
Conclusion: All five publications analyzed remain unretracted and warrant retraction to maintain the integrity of the scientific record. This analysis highlights the need for improved institutional supervision. The current retraction guidelines of the Committee on Publication Ethics are inadequate for large-scale scientific misconduct. Comprehensive ethics training, regular audits, and transparent reporting are essential to ensure the credibility of published research.
{"title":"Assessing institutional responsibility in scientific misconduct: A case study of enoximone research by Joachim Boldt.","authors":"Christian J Wiedermann","doi":"10.1177/17407745241297162","DOIUrl":"https://doi.org/10.1177/17407745241297162","url":null,"abstract":"<p><strong>Background: </strong>Enoximone, a phosphodiesterase III inhibitor, was approved in Germany in 1989 and initially proposed for heart failure and perioperative cardiac conditions. The research of Joachim Boldt and his supervisor Gunter Hempelmann came under scrutiny after investigations revealed systematic scientific misconduct leading to numerous retractions. Therefore, early enoximone studies by Boldt and Hempelmann from 1988 to 1991 were reviewed.</p><p><strong>Methods: </strong>PubMed-listed publications and dissertations on enoximone from the Justus-Liebig-University of Giessen were analyzed for study design, participant demographics, methods, and outcomes. The data were screened for duplications and inconsistencies.</p><p><strong>Results: </strong>Of seven randomized controlled trial articles identified, two were retracted. Five of the non-retracted articles reported similarly designed studies and included similar patient cohorts. The analysis revealed overlap in patient demographics and reported outcomes and inconsistencies in cardiac index values between trials, suggesting data duplication and manipulation. Several other articles have been retracted. The analysis results of misconduct and co-authorship of retracted studies during Boldt's late formative years indicate inadequate mentorship. The university's slow response in supporting the retraction of publications involving scientific misconduct indicates systemic oversight problems.</p><p><strong>Conclusion: </strong>All five publications analyzed remained active and warrant retraction to maintain the integrity of the scientific record. This analysis highlights the need for improved institutional supervision. The current guidelines of the Committee on Publication Ethics for retraction are inadequate for large-scale scientific misconduct. Comprehensive ethics training, regular audits, and transparent reporting are essential to ensure the credibility of published research.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745241297162"},"PeriodicalIF":2.2,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142853387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-05-21 | DOI: 10.1177/17407745241251780
Efficient designs for three-sequence stepped wedge trials with continuous recruitment.
Richard Hooper, Olivier Quintin, Jessica Kasza
Background/aims: The standard approach to designing stepped wedge trials that recruit participants in a continuous stream is to divide time into periods of equal length. But the choice of design in such cases is infinitely more flexible: each cluster could cross from the control to the intervention at any point on the continuous time-scale. We consider the case of a stepped wedge design with clusters randomised to just three sequences (designs with small numbers of sequences may be preferred for their simplicity and practicality) and investigate the choice of design that minimises the variance of the treatment effect estimator under different assumptions about the intra-cluster correlation.
Methods: We make some simplifying assumptions in order to calculate the variance: in particular that we recruit the same number of participants, m, from each cluster over the course of the trial, and that participants present at regularly spaced intervals. We consider an intra-cluster correlation that decays exponentially with separation in time between the presentation of two individuals from the same cluster, from a value of ρ for two individuals who present at the same time, to a value of ρτ for individuals presenting at the start and end of the trial recruitment interval. We restrict attention to three-sequence designs with centrosymmetry - the property that if we reverse time and swap the intervention and control conditions then the design looks the same. We obtain an expression for the variance of the treatment effect estimator adjusted for effects of time, using methods for generalised least squares estimation, and we evaluate this expression numerically for different designs, and for different parameter values.
Results: There is a two-dimensional space of possible three-sequence, centrosymmetric stepped wedge designs with continuous recruitment. The variance of the treatment effect estimator for given ρ and τ can be plotted as a contour map over this space. The shape of this variance surface depends on τ and on the parameter mρ/(1-ρ), but typically indicates a broad, flat region of close-to-optimal designs. The 'standard' design with equally spaced periods and 1:1:1 allocation rarely performs well, however.
Conclusions: In many different settings, a relatively simple design can be found (e.g. one based on simple fractions) that offers close-to-optimal efficiency in that setting. There may also be designs that are robustly efficient over a wide range of settings. Contour maps of the kind we illustrate can help guide this choice. If efficiency is offered a
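The generalised least squares calculation described in the Methods can be sketched numerically. The function below builds, for one cluster per sequence, a within-cluster correlation matrix with the exponential decay described above (ρ at zero separation, ρτ across the whole recruitment interval) and returns the variance of the time-adjusted treatment effect estimator. The crossover fractions, m, ρ, τ and the number of time categories are illustrative choices, and the code is a simplified reconstruction rather than the authors' implementation.

```r
# Simplified reconstruction (not the authors' code) of the GLS variance
# calculation for a three-sequence stepped wedge design with continuous
# recruitment; one cluster per sequence, illustrative parameter values.
var_treatment_effect <- function(crossover,        # fraction of the trial at which each sequence crosses over
                                 m = 20,           # participants recruited per cluster
                                 rho = 0.05,       # correlation at zero time separation
                                 tau = 0.5,        # decay: correlation is rho * tau at separation 1
                                 n_periods = 6) {  # categorical time effects used for adjustment
  K <- length(crossover)
  times   <- (seq_len(m) - 0.5) / m                # regularly spaced presentation times on [0, 1]
  grid    <- expand.grid(i = seq_len(m), k = seq_len(K))
  t_all   <- times[grid$i]
  cluster <- grid$k
  treat   <- as.numeric(t_all >= crossover[cluster])   # 1 once the cluster has crossed over
  period  <- cut(t_all, breaks = seq(0, 1, length.out = n_periods + 1),
                 include.lowest = TRUE)
  X <- model.matrix(~ period + treat)

  # Block-diagonal covariance: within-cluster correlation rho * tau^d, where d is
  # the separation between two participants as a fraction of the recruitment interval.
  N <- nrow(X)
  V <- diag(N)
  for (k in seq_len(K)) {
    idx <- which(cluster == k)
    d <- abs(outer(t_all[idx], t_all[idx], "-"))
    block <- rho * tau^d
    diag(block) <- 1
    V[idx, idx] <- block
  }

  info   <- t(X) %*% solve(V) %*% X                # GLS information matrix
  covmat <- solve(info)
  covmat[colnames(X) == "treat", colnames(X) == "treat"]
}

# Compare the 'standard' equally spaced design with an alternative crossover pattern
var_treatment_effect(crossover = c(0.25, 0.50, 0.75))
var_treatment_effect(crossover = c(0.15, 0.50, 0.85))
```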
{"title":"Efficient designs for three-sequence stepped wedge trials with continuous recruitment.","authors":"Richard Hooper, Olivier Quintin, Jessica Kasza","doi":"10.1177/17407745241251780","DOIUrl":"10.1177/17407745241251780","url":null,"abstract":"<p><strong>Background/aims: </strong>The standard approach to designing stepped wedge trials that recruit participants in a continuous stream is to divide time into periods of equal length. But the choice of design in such cases is infinitely more flexible: each cluster could cross from the control to the intervention at any point on the continuous time-scale. We consider the case of a stepped wedge design with clusters randomised to just three sequences (designs with small numbers of sequences may be preferred for their simplicity and practicality) and investigate the choice of design that minimises the variance of the treatment effect estimator under different assumptions about the intra-cluster correlation.</p><p><strong>Methods: </strong>We make some simplifying assumptions in order to calculate the variance: in particular that we recruit the same number of participants, <math><mrow><mi>m</mi></mrow></math>, from each cluster over the course of the trial, and that participants present at regularly spaced intervals. We consider an intra-cluster correlation that decays exponentially with separation in time between the presentation of two individuals from the same cluster, from a value of <math><mrow><mi>ρ</mi></mrow></math> for two individuals who present at the same time, to a value of <math><mrow><mi>ρ</mi><mi>τ</mi></mrow></math> for individuals presenting at the start and end of the trial recruitment interval. We restrict attention to three-sequence designs with centrosymmetry - the property that if we reverse time and swap the intervention and control conditions then the design looks the same. We obtain an expression for the variance of the treatment effect estimator adjusted for effects of time, using methods for generalised least squares estimation, and we evaluate this expression numerically for different designs, and for different parameter values.</p><p><strong>Results: </strong>There is a two-dimensional space of possible three-sequence, centrosymmetric stepped wedge designs with continuous recruitment. The variance of the treatment effect estimator for given <math><mrow><mi>ρ</mi></mrow></math> and <math><mrow><mi>τ</mi></mrow></math> can be plotted as a contour map over this space. The shape of this variance surface depends on <math><mrow><mi>τ</mi></mrow></math> and on the parameter <math><mrow><mi>m</mi><mi>ρ</mi><mo>/</mo><mo>(</mo><mn>1</mn><mo>-</mo><mi>ρ</mi><mo>)</mo></mrow></math>, but typically indicates a broad, flat region of close-to-optimal designs. The 'standard' design with equally spaced periods and 1:1:1 allocation rarely performs well, however.</p><p><strong>Conclusions: </strong>In many different settings, a relatively simple design can be found (e.g. one based on simple fractions) that offers close-to-optimal efficiency in that setting. There may also be designs that are robustly efficient over a wide range of settings. Contour maps of the kind we illustrate can help guide this choice. 
If efficiency is offered a","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"723-733"},"PeriodicalIF":2.2,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528865/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141074965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-03-29 | DOI: 10.1177/17407745241238393
Society for Clinical Trials Data Monitoring Committee initiative website: Closing the gap.
David L DeMets, Susan Halabi, Lehana Thabane, Janet Wittes
{"title":"Society for Clinical Trials Data Monitoring Committee initiative website: Closing the gap.","authors":"David L DeMets, Susan Halabi, Lehana Thabane, Janet Wittes","doi":"10.1177/17407745241238393","DOIUrl":"10.1177/17407745241238393","url":null,"abstract":"","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"763-764"},"PeriodicalIF":2.2,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140326457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-05-16 | DOI: 10.1177/17407745241247334
A comparison of computational algorithms for the Bayesian analysis of clinical trials.
Ziming Chen, Jeffrey S Berger, Lana A Castellucci, Michael Farkouh, Ewan C Goligher, Erinn M Hade, Beverley J Hunt, Lucy Z Kornblith, Patrick R Lawler, Eric S Leifer, Elizabeth Lorenzi, Matthew D Neal, Ryan Zarychanski, Anna Heath
Background: Clinical trials are increasingly using Bayesian methods for their design and analysis. Inference in Bayesian trials typically uses simulation-based approaches such as Markov Chain Monte Carlo methods. Markov Chain Monte Carlo has high computational cost and can be complex to implement. The Integrated Nested Laplace Approximations algorithm provides approximate Bayesian inference without the need for computationally complex simulations, making it more efficient than Markov Chain Monte Carlo. The practical properties of Integrated Nested Laplace Approximations compared to Markov Chain Monte Carlo have not been considered for clinical trials. Using data from a published clinical trial, we aim to investigate whether Integrated Nested Laplace Approximations is a feasible and accurate alternative to Markov Chain Monte Carlo and provide practical guidance for trialists interested in Bayesian trial design.
Methods: Data from an international Bayesian multi-platform adaptive trial that compared therapeutic-dose anticoagulation with heparin to usual care in non-critically ill patients hospitalized for COVID-19 were used to fit Bayesian hierarchical generalized mixed models. Integrated Nested Laplace Approximations was compared to two Markov Chain Monte Carlo algorithms, implemented in the software JAGS and stan, using packages available in the statistical software R. Seven outcomes were analysed: organ-support free days (an ordinal outcome), five binary outcomes related to survival and length of hospital stay, and a time-to-event outcome. The posterior distributions for the treatment and sex effects and the variances for the hierarchical effects of age, site and time period were obtained. We summarized these posteriors by calculating the mean, standard deviations and the 95% equitailed credible intervals and presenting the results graphically. The computation time for each algorithm was recorded.
Results: The average overlap of the 95% credible interval for the treatment and sex effects estimated using Integrated Nested Laplace Approximations was 96% and 97.6%, respectively, compared with stan. The graphical posterior densities for these effects overlapped for all three algorithms. The posterior means for the variances of the hierarchical effects of age, site and time estimated using Integrated Nested Laplace Approximations are within the 95% credible intervals estimated using Markov Chain Monte Carlo, but the average overlap of the credible interval is lower for Integrated Nested Laplace Approximations compared to stan: 77%, 85.6% and 91.3%, respectively. Integrated Nested Laplace Approximations and stan were easily implemented in clear, well-established packages in R, while JAGS required the direct specification of the model. Integrated Nested Laplace Approximations was between 85 and 269 times faster than stan and 26 and 1852 times faster than JAGS.
Conclusion: Integrated Nested Laplace Approximations can reduce the computational complexity of Bayesian analyses in clinical trials, as it is easily implemented in R, is substantially faster than the Markov Chain Monte Carlo methods implemented in JAGS and stan, and provides almost identical approximations to the posterior distribution of the treatment effect. Integrated Nested Laplace Approximations was less accurate when estimating the posterior distributions of the variances of the hierarchical effects, particularly for the proportional odds model, and future work should determine whether the Integrated Nested Laplace Approximations algorithm can be adjusted to improve this estimation.
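For readers unfamiliar with the approach, the sketch below shows what fitting one of the binary outcomes with R-INLA might look like, on simulated data. It is only a schematic stand-in for the models compared in the article: the trial's models also included hierarchical effects for age and time period, used the trial's own prior specifications, and covered ordinal and time-to-event outcomes.

```r
library(INLA)  # R-INLA is installed from https://www.r-inla.org, not from CRAN

set.seed(42)
n <- 500
trial_data <- data.frame(
  treatment = rbinom(n, 1, 0.5),
  sex       = rbinom(n, 1, 0.5),
  site      = sample(1:20, n, replace = TRUE)
)
site_effect <- rnorm(20, sd = 0.4)
trial_data$died <- rbinom(n, 1, plogis(-1 + 0.3 * trial_data$treatment +
                                         site_effect[trial_data$site]))

# Approximate Bayesian inference without MCMC: fixed effects for treatment and
# sex, an exchangeable (iid) hierarchical effect for site.
fit <- inla(died ~ treatment + sex + f(site, model = "iid"),
            family = "binomial", data = trial_data)

fit$summary.fixed     # posterior mean, SD and credible interval for treatment and sex
fit$summary.hyperpar  # precision of the site-level hierarchical effect
```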
{"title":"A comparison of computational algorithms for the Bayesian analysis of clinical trials.","authors":"Ziming Chen, Jeffrey S Berger, Lana A Castellucci, Michael Farkouh, Ewan C Goligher, Erinn M Hade, Beverley J Hunt, Lucy Z Kornblith, Patrick R Lawler, Eric S Leifer, Elizabeth Lorenzi, Matthew D Neal, Ryan Zarychanski, Anna Heath","doi":"10.1177/17407745241247334","DOIUrl":"10.1177/17407745241247334","url":null,"abstract":"<p><strong>Background: </strong>Clinical trials are increasingly using Bayesian methods for their design and analysis. Inference in Bayesian trials typically uses simulation-based approaches such as Markov Chain Monte Carlo methods. Markov Chain Monte Carlo has high computational cost and can be complex to implement. The Integrated Nested Laplace Approximations algorithm provides approximate Bayesian inference without the need for computationally complex simulations, making it more efficient than Markov Chain Monte Carlo. The practical properties of Integrated Nested Laplace Approximations compared to Markov Chain Monte Carlo have not been considered for clinical trials. Using data from a published clinical trial, we aim to investigate whether Integrated Nested Laplace Approximations is a feasible and accurate alternative to Markov Chain Monte Carlo and provide practical guidance for trialists interested in Bayesian trial design.</p><p><strong>Methods: </strong>Data from an international Bayesian multi-platform adaptive trial that compared therapeutic-dose anticoagulation with heparin to usual care in non-critically ill patients hospitalized for COVID-19 were used to fit Bayesian hierarchical generalized mixed models. Integrated Nested Laplace Approximations was compared to two Markov Chain Monte Carlo algorithms, implemented in the software JAGS and stan, using packages available in the statistical software R. Seven outcomes were analysed: organ-support free days (an ordinal outcome), five binary outcomes related to survival and length of hospital stay, and a time-to-event outcome. The posterior distributions for the treatment and sex effects and the variances for the hierarchical effects of age, site and time period were obtained. We summarized these posteriors by calculating the mean, standard deviations and the 95% equitailed credible intervals and presenting the results graphically. The computation time for each algorithm was recorded.</p><p><strong>Results: </strong>The average overlap of the 95% credible interval for the treatment and sex effects estimated using Integrated Nested Laplace Approximations was 96% and 97.6% compared with stan, respectively. The graphical posterior densities for these effects overlapped for all three algorithms. The posterior mean for the variance of the hierarchical effects of age, site and time estimated using Integrated Nested Laplace Approximations are within the 95% credible interval estimated using Markov Chain Monte Carlo but the average overlap of the credible interval is lower, 77%, 85.6% and 91.3%, respectively, for Integrated Nested Laplace Approximations compared to stan. Integrated Nested Laplace Approximations and stan were easily implemented in clear, well-established packages in R, while JAGS required the direct specification of the model. 
Integrated Nested Laplace Approximations was between 85 and 269 times faster than stan and 26 and 1852 times faster than JAGS.</p><p><strong>Conclusion: </str","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"689-700"},"PeriodicalIF":2.2,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11530324/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140944214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-08-15 | DOI: 10.1177/17407745241266168
Commentary on Astrachan et al. The transmutation of research risk in pragmatic clinical trials.
Jonathan Kimmelman
{"title":"Commentary on Astrachan et al. The transmutation of research risk in pragmatic clinical trials.","authors":"Jonathan Kimmelman","doi":"10.1177/17407745241266168","DOIUrl":"10.1177/17407745241266168","url":null,"abstract":"","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"666-668"},"PeriodicalIF":2.2,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141987543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-05-17 | DOI: 10.1177/17407745241244801
Comparison of Bayesian and frequentist monitoring boundaries motivated by the Multiplatform Randomized Clinical Trial.
Jungnam Joo, Eric S Leifer, Michael A Proschan, James F Troendle, Harmony R Reynolds, Erinn A Hade, Patrick R Lawler, Dong-Yun Kim, Nancy L Geller
Background: The coronavirus disease 2019 pandemic highlighted the need to conduct efficient randomized clinical trials with interim monitoring guidelines for efficacy and futility. Several randomized coronavirus disease 2019 trials, including the Multiplatform Randomized Clinical Trial (mpRCT), used Bayesian guidelines with the belief that they would lead to quicker efficacy or futility decisions than traditional "frequentist" guidelines, such as spending functions and conditional power. We explore this belief using an intuitive interpretation of Bayesian methods as translating prior opinion about the treatment effect into imaginary prior data. These imaginary observations are then combined with actual observations from the trial to draw conclusions. Using this approach, we show that the Bayesian efficacy boundary used in the mpRCT is actually quite similar to the frequentist Pocock boundary.
Methods: The mpRCT's efficacy monitoring guideline considered stopping if, given the observed data, there was greater than 99% probability that the treatment was effective (odds ratio greater than 1). The mpRCT's futility monitoring guideline considered stopping if, given the observed data, there was greater than 95% probability that the treatment was less than 20% effective (odds ratio less than 1.2). The mpRCT used a normal prior distribution that can be thought of as supplementing the actual patients' data with imaginary patients' data. We explore the effects of varying probability thresholds and the prior-to-actual patient ratio in the mpRCT and compare the resulting Bayesian efficacy monitoring guidelines to the well-known frequentist Pocock and O'Brien-Fleming efficacy guidelines. We also contrast Bayesian futility guidelines with a more traditional 20% conditional power futility guideline.
Results: A Bayesian efficacy and futility monitoring boundary using a neutral, weakly informative prior distribution and a fixed probability threshold at all interim analyses is more aggressive than the commonly used O'Brien-Fleming efficacy boundary coupled with a 20% conditional power threshold for futility. The trade-off is that more aggressive boundaries tend to stop trials earlier, but incur a loss of power. Interestingly, the Bayesian efficacy boundary with 99% probability threshold is very similar to the classic Pocock efficacy boundary.
Conclusions: In a pandemic where quickly weeding out ineffective treatments and identifying effective treatments is paramount, aggressive monitoring may be preferred to conservative approaches, such as the O'Brien-Fleming boundary. This can be accomplished with either Bayesian or frequentist methods.
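To illustrate the monitoring rules described above, the sketch below computes, under a normal approximation on the log-odds-ratio scale, the posterior probabilities behind the 99% efficacy and 95% futility thresholds, with a conjugate normal prior playing the role of the 'imaginary' prior patients. The prior standard deviation and the interim estimate are invented for the example and are not the mpRCT values.

```r
# Illustrative sketch of Bayesian interim monitoring on the log-odds-ratio scale
# (normal prior x normal likelihood -> normal posterior); not the mpRCT code.
posterior_probs <- function(logor_hat, se_hat, prior_mean = 0, prior_sd = 0.5) {
  post_prec <- 1 / prior_sd^2 + 1 / se_hat^2           # precisions add
  post_mean <- (prior_mean / prior_sd^2 + logor_hat / se_hat^2) / post_prec
  post_sd   <- sqrt(1 / post_prec)
  c(p_effective = 1 - pnorm(0, post_mean, post_sd),    # P(odds ratio > 1   | data)
    p_futile    = pnorm(log(1.2), post_mean, post_sd)) # P(odds ratio < 1.2 | data)
}

# Hypothetical interim look: estimated log odds ratio 0.35 with standard error 0.15
probs <- posterior_probs(logor_hat = 0.35, se_hat = 0.15)
probs
probs["p_effective"] > 0.99   # stop for efficacy?
probs["p_futile"] > 0.95      # stop for futility?
```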
{"title":"Comparison of Bayesian and frequentist monitoring boundaries motivated by the Multiplatform Randomized Clinical Trial.","authors":"Jungnam Joo, Eric S Leifer, Michael A Proschan, James F Troendle, Harmony R Reynolds, Erinn A Hade, Patrick R Lawler, Dong-Yun Kim, Nancy L Geller","doi":"10.1177/17407745241244801","DOIUrl":"10.1177/17407745241244801","url":null,"abstract":"<p><strong>Background: </strong>The coronavirus disease 2019 pandemic highlighted the need to conduct efficient randomized clinical trials with interim monitoring guidelines for efficacy and futility. Several randomized coronavirus disease 2019 trials, including the Multiplatform Randomized Clinical Trial (mpRCT), used Bayesian guidelines with the belief that they would lead to quicker efficacy or futility decisions than traditional \"frequentist\" guidelines, such as spending functions and conditional power. We explore this belief using an intuitive interpretation of Bayesian methods as translating prior opinion about the treatment effect into imaginary prior data. These imaginary observations are then combined with actual observations from the trial to make conclusions. Using this approach, we show that the Bayesian efficacy boundary used in mpRCT is actually quite similar to the frequentist Pocock boundary.</p><p><strong>Methods: </strong>The mpRCT's efficacy monitoring guideline considered stopping if, given the observed data, there was greater than 99% probability that the treatment was effective (odds ratio greater than 1). The mpRCT's futility monitoring guideline considered stopping if, given the observed data, there was greater than 95% probability that the treatment was less than 20% effective (odds ratio less than 1.2). The mpRCT used a normal prior distribution that can be thought of as supplementing the actual patients' data with imaginary patients' data. We explore the effects of varying probability thresholds and the prior-to-actual patient ratio in the mpRCT and compare the resulting Bayesian efficacy monitoring guidelines to the well-known frequentist Pocock and O'Brien-Fleming efficacy guidelines. We also contrast Bayesian futility guidelines with a more traditional 20% conditional power futility guideline.</p><p><strong>Results: </strong>A Bayesian efficacy and futility monitoring boundary using a neutral, weakly informative prior distribution and a fixed probability threshold at all interim analyses is more aggressive than the commonly used O'Brien-Fleming efficacy boundary coupled with a 20% conditional power threshold for futility. The trade-off is that more aggressive boundaries tend to stop trials earlier, but incur a loss of power. Interestingly, the Bayesian efficacy boundary with 99% probability threshold is very similar to the classic Pocock efficacy boundary.</p><p><strong>Conclusions: </strong>In a pandemic where quickly weeding out ineffective treatments and identifying effective treatments is paramount, aggressive monitoring may be preferred to conservative approaches, such as the O'Brien-Fleming boundary. 
This can be accomplished with either Bayesian or frequentist methods.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"701-709"},"PeriodicalIF":2.2,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11530333/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140955509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-08-15 | DOI: 10.1177/17407745241266155
Individualized clinical decisions within standard-of-care pragmatic clinical trials: Implications for consent.
Isabel M Astrachan, James Flory, Scott Yh Kim
Pragmatic clinical trials of standard-of-care interventions compare the relative merits of medical treatments already in use. Traditional research informed consent processes pose significant obstacles to these trials, raising the question of whether they may be conducted with alteration or waiver of informed consent. However, to even be eligible for such an alteration or waiver, a trial in the United States must pose no more than minimal research risk. We argue that standard-of-care pragmatic clinical trials can be designed to ensure minimal research risk if the random assignment of an intervention can accommodate individualized, clinically motivated decision-making for each participant. Such a design ensures that patient-participants are not exposed to any risks beyond the clinical risks of the interventions, and thus the trial poses minimal research risk. We explain the logic of this view by comparing three scenarios of standard-of-care pragmatic clinical trials: one with informed consent, one without informed consent, and one recently proposed design called the Decision Architecture Randomization Trial. We then conclude by briefly showing that our proposal suggests a natural way to determine when to use an alteration versus a waiver of informed consent.
{"title":"Individualized clinical decisions within standard-of-care pragmatic clinical trials: Implications for consent.","authors":"Isabel M Astrachan, James Flory, Scott Yh Kim","doi":"10.1177/17407745241266155","DOIUrl":"10.1177/17407745241266155","url":null,"abstract":"<p><p>Pragmatic clinical trials of standard-of-care interventions compare the relative merits of medical treatments already in use. Traditional research informed consent processes pose significant obstacles to these trials, raising the question of whether they may be conducted with alteration or waiver of informed consent. However, to even be eligible, such a trial in the United States must have no more than minimal research risk. We argue that standard-of-care pragmatic clinical trials can be designed to ensure that they are minimal research risk if the random assignment of an intervention in a pragmatic clinical trial can accommodate individualized, clinically motivated decision-making for each participant. Such a design will ensure that the patient-participants are not exposed to any risks beyond the clinical risks of the interventions, and thus, the trial will have minimal research risk. We explain the logic of this view by comparing three scenarios of standard-of-care pragmatic clinical trials: one with informed consent, one without informed consent, and one recently proposed design called Decision Architecture Randomization Trial. We then conclude by briefly showing that our proposal suggests a natural way to determine when to use an alteration versus a waiver of informed consent.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"659-665"},"PeriodicalIF":2.2,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11530319/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141987544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}