Predicting the replicability of social and behavioural science claims in COVID-19 preprints

IF: 21.4 | CAS Tier 1 (Psychology) | JCR Q1 (Multidisciplinary Sciences) | Nature Human Behaviour | Pub Date: 2024-12-20 | DOI: 10.1038/s41562-024-01961-1
Alexandru Marcoci, David P. Wilkinson, Ans Vercammen, Bonnie C. Wintle, Anna Lou Abatayo, Ernest Baskin, Henk Berkman, Erin M. Buchanan, Sara Capitán, Tabaré Capitán, Ginny Chan, Kent Jason G. Cheng, Tom Coupé, Sarah Dryhurst, Jianhua Duan, John E. Edlund, Timothy M. Errington, Anna Fedor, Fiona Fidler, James G. Field, Nicholas Fox, Hannah Fraser, Alexandra L. J. Freeman, Anca Hanea, Felix Holzmeister, Sanghyun Hong, Raquel Huggins, Nick Huntington-Klein, Magnus Johannesson, Angela M. Jones, Hansika Kapoor, John Kerr, Melissa Kline Struhl, Marta Kołczyńska, Yang Liu, Zachary Loomas, Brianna Luis, Esteban Méndez, Olivia Miske, Fallon Mody, Carolin Nast, Brian A. Nosek, E. Simon Parsons, Thomas Pfeiffer, W. Robert Reed, Jon Roozenbeek, Alexa R. Schlyfestone, Claudia R. Schneider, Andrew Soh, Zhongchen Song, Anirudh Tagat, Melba Tutor, Andrew H. Tyner, Karolina Urbanska, Sander van der Linden
{"title":"Predicting the replicability of social and behavioural science claims in COVID-19 preprints","authors":"Alexandru Marcoci, David P. Wilkinson, Ans Vercammen, Bonnie C. Wintle, Anna Lou Abatayo, Ernest Baskin, Henk Berkman, Erin M. Buchanan, Sara Capitán, Tabaré Capitán, Ginny Chan, Kent Jason G. Cheng, Tom Coupé, Sarah Dryhurst, Jianhua Duan, John E. Edlund, Timothy M. Errington, Anna Fedor, Fiona Fidler, James G. Field, Nicholas Fox, Hannah Fraser, Alexandra L. J. Freeman, Anca Hanea, Felix Holzmeister, Sanghyun Hong, Raquel Huggins, Nick Huntington-Klein, Magnus Johannesson, Angela M. Jones, Hansika Kapoor, John Kerr, Melissa Kline Struhl, Marta Kołczyńska, Yang Liu, Zachary Loomas, Brianna Luis, Esteban Méndez, Olivia Miske, Fallon Mody, Carolin Nast, Brian A. Nosek, E. Simon Parsons, Thomas Pfeiffer, W. Robert Reed, Jon Roozenbeek, Alexa R. Schlyfestone, Claudia R. Schneider, Andrew Soh, Zhongchen Song, Anirudh Tagat, Melba Tutor, Andrew H. Tyner, Karolina Urbanska, Sander van der Linden","doi":"10.1038/s41562-024-01961-1","DOIUrl":null,"url":null,"abstract":"<p>Replications are important for assessing the reliability of published findings. However, they are costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives such as eliciting predictions could accelerate assessment for rapid policy implementation in a crisis and help guide a more efficient allocation of scarce replication resources. We elicited judgements from participants on 100 claims from preprints about an emerging area of research (COVID-19 pandemic) using an interactive structured elicitation protocol, and we conducted 29 new high-powered replications. After interacting with their peers, participant groups with lower task expertise (‘beginners’) updated their estimates and confidence in their judgements significantly more than groups with greater task expertise (‘experienced’). For experienced individuals, the average accuracy was 0.57 (95% CI: [0.53, 0.61]) after interaction, and they correctly classified 61% of claims; beginners’ average accuracy was 0.58 (95% CI: [0.54, 0.62]), correctly classifying 69% of claims. The difference in accuracy between groups was not statistically significant and their judgements on the full set of claims were correlated (<i>r</i>(98) = 0.48, <i>P</i> &lt; 0.001). These results suggest that both beginners and more-experienced participants using a structured process have some ability to make better-than-chance predictions about the reliability of ‘fast science’ under conditions of high uncertainty. However, given the importance of such assessments for making evidence-based critical decisions in a crisis, more research is required to understand who the right experts in forecasting replicability are and how their judgements ought to be elicited.</p>","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":"22 1","pages":""},"PeriodicalIF":21.4000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Human Behaviour","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1038/s41562-024-01961-1","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Replications are important for assessing the reliability of published findings. However, they are costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives such as eliciting predictions could accelerate assessment for rapid policy implementation in a crisis and help guide a more efficient allocation of scarce replication resources. We elicited judgements from participants on 100 claims from preprints about an emerging area of research (COVID-19 pandemic) using an interactive structured elicitation protocol, and we conducted 29 new high-powered replications. After interacting with their peers, participant groups with lower task expertise (‘beginners’) updated their estimates and confidence in their judgements significantly more than groups with greater task expertise (‘experienced’). For experienced individuals, the average accuracy was 0.57 (95% CI: [0.53, 0.61]) after interaction, and they correctly classified 61% of claims; beginners’ average accuracy was 0.58 (95% CI: [0.54, 0.62]), correctly classifying 69% of claims. The difference in accuracy between groups was not statistically significant and their judgements on the full set of claims were correlated (r(98) = 0.48, P < 0.001). These results suggest that both beginners and more-experienced participants using a structured process have some ability to make better-than-chance predictions about the reliability of ‘fast science’ under conditions of high uncertainty. However, given the importance of such assessments for making evidence-based critical decisions in a crisis, more research is required to understand who the right experts in forecasting replicability are and how their judgements ought to be elicited.
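To make the reported statistics concrete, here is a minimal Python sketch, not the authors' analysis code: it scores hypothetical group-level replication probabilities against binary replication outcomes and correlates two groups' judgements across claims. The accuracy metric used (mean probability assigned to the realized outcome) and all data arrays are assumptions for illustration only.

```python
# Minimal sketch (not the authors' analysis code) of scoring elicited
# replication-probability judgements. All data below are placeholders;
# the "accuracy" metric (mean probability assigned to the realized
# outcome) is an assumption about what the reported figures measure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical group-level probabilities that each of the 29 replicated
# claims would replicate, and the binary replication outcomes.
probs = rng.uniform(0.2, 0.9, size=29)
outcomes = rng.integers(0, 2, size=29)  # 1 = replicated, 0 = did not

# Accuracy: probability assigned to the outcome that actually occurred.
accuracy = np.mean(np.where(outcomes == 1, probs, 1.0 - probs))

# Classification rate: call a claim "replicates" when p > 0.5.
correct = np.mean((probs > 0.5) == (outcomes == 1))
print(f"mean accuracy: {accuracy:.2f}; correctly classified: {correct:.0%}")

# Correlation between the two groups' judgements across all 100 claims.
beginners = rng.uniform(0.0, 1.0, size=100)
experienced = rng.uniform(0.0, 1.0, size=100)
r, p = stats.pearsonr(beginners, experienced)
print(f"r({beginners.size - 2}) = {r:.2f}, P = {p:.3g}")
```

Note that with judgements on 100 claims, the Pearson correlation carries n − 2 = 98 degrees of freedom, which is why the abstract reports r(98).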

Source journal: Nature Human Behaviour (Psychology – Social Psychology)
CiteScore: 36.80
Self-citation rate: 1.00%
Articles published per year: 227
Journal description: Nature Human Behaviour publishes research of outstanding significance into any aspect of human behaviour, spanning its psychological, biological and social bases, as well as the origins, development and disorders of human behaviour. The journal's primary aim is to increase the visibility of research in the field and to enhance its societal reach and impact.
Latest articles in this journal:
- Combating China’s retraction crisis
- First farmers of Central Europe do not show family-related inequality
- Trust in scientists and their role in society across 68 countries
- Healthcare use in 12–18-year-old adolescents vaccinated against SARS-CoV-2 versus unvaccinated in a national register-based Danish cohort
- Do not blame ‘queen bees’ for gender inequality in academia