The impact of feedback training on prediction of cancer clinical trial results.

IF 2.2 · CAS Tier 3 (Medicine) · Q3 MEDICINE, RESEARCH & EXPERIMENTAL · Clinical Trials, pages 143-151 · Pub Date: 2024-04-01 · Epub Date: 2023-10-24 · DOI: 10.1177/17407745231203375
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11005298/pdf/
Adélaïde Doussau, Patrick Kane, Jeffrey Peppercorn, Aden C Feustel, Sylviya Ganeshamoorthy, Natasha Kekre, Daniel M Benjamin, Jonathan Kimmelman

Abstract

Introduction: Funders must make difficult decisions about which treatments to prioritize for randomized trials. Earlier research suggests that experts have no ability to predict which treatments will vindicate their promise. We tested whether a brief training module could improve experts' trial predictions.

Methods: We randomized a sample of breast cancer and hematology-oncology experts to the presence or absence of a feedback training module where experts predicted outcomes for five recently completed randomized controlled trials and received feedback on accuracy. Experts then predicted primary outcome attainment for a sample of ongoing randomized controlled trials. Prediction skill was assessed by Brier scores, which measure the average deviation between their predictions and actual outcomes. Secondary outcomes were discrimination (ability to distinguish between positive and non-positive trials) and calibration (higher predictions reflecting higher probability of trials being positive).
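As a concrete illustration (not taken from the article), the Brier score described above can be computed as the mean squared deviation between forecast probabilities and observed binary outcomes; the function name and sample numbers here are hypothetical:

```python
def brier_score(predictions, outcomes):
    """Mean squared deviation between forecast probabilities (0-1)
    and observed binary outcomes (1 = positive trial, 0 = not).
    Lower is better; 0 is a perfect forecaster."""
    if len(predictions) != len(outcomes):
        raise ValueError("predictions and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Confident, mostly-correct forecasts score low;
# a constant 0.5 forecast scores exactly 0.25 regardless of outcomes.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))      # small deviations, low score
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))      # -> 0.25
```

This is why the article can use 0.25 as the benchmark for an uninformative forecast: guessing 50% for every trial always yields a Brier score of exactly 0.25.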

Results: A total of 148 experts (46 for breast cancer, 54 for leukemia, and 48 for lymphoma) were randomized between May and December 2017 and included in the analysis (1217 forecasts for 25 trials). Feedback did not improve prediction skill (mean Brier score for control: 0.22, 95% confidence interval = 0.20-0.24 vs feedback arm: 0.21, 95% confidence interval = 0.20-0.23; p = 0.51). Control and feedback arms showed similar discrimination (area under the curve = 0.70 vs 0.73, p = 0.24) and calibration (calibration index = 0.01 vs 0.01, p = 0.81). However, experts in both arms offered predictions that were significantly more accurate than uninformative forecasts of 50% (Brier score = 0.25).
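The discrimination comparison above uses the area under the ROC curve, which can be read as the probability that a randomly chosen positive trial received a higher forecast than a randomly chosen non-positive trial. A minimal pairwise sketch (hypothetical function and data, not the article's analysis code):

```python
def auc(predictions, outcomes):
    """Probability that a randomly chosen positive trial (outcome 1)
    received a higher forecast than a randomly chosen non-positive
    trial (outcome 0); ties count as half. 0.5 = no discrimination,
    1.0 = perfect discrimination."""
    pos = [p for p, o in zip(predictions, outcomes) if o == 1]
    neg = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one non-positive outcome")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0 (perfect ranking)
print(auc([0.5, 0.5], [1, 0]))                  # -> 0.5 (no discrimination)
```

On this scale, the reported AUCs of 0.70 and 0.73 mean experts in both arms ranked positive trials above non-positive ones clearly better than chance.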

Discussion: A short training module did not improve predictions for cancer trial results. However, expert communities showed unexpected ability to anticipate positive trials.

Pre-registration record: https://aspredicted.org/4ka6r.pdf

Source journal: Clinical Trials (Medicine: Research & Experimental)
CiteScore: 4.10
Self-citation rate: 3.70%
Articles per year: 82
Review time: 6-12 weeks
Journal description: Clinical Trials is dedicated to advancing knowledge on the design and conduct of clinical trials and related research methodologies. Covering the design, conduct, analysis, synthesis and evaluation of key methodologies, the journal remains on the cusp of the latest topics, including ethics, regulation and policy impact.
Latest articles in this journal:
- Challenges in conducting efficacy trials for new COVID-19 vaccines in developed countries.
- Society for Clinical Trials Data Monitoring Committee initiative website: Closing the gap.
- A comparison of computational algorithms for the Bayesian analysis of clinical trials.
- Comparison of Bayesian and frequentist monitoring boundaries motivated by the Multiplatform Randomized Clinical Trial.
- Efficient designs for three-sequence stepped wedge trials with continuous recruitment.