Quantifying error in effect size estimates in attention, executive function, and implicit learning.

IF 2.2 · CAS Zone 2 (Psychology) · Q2 PSYCHOLOGY · Journal of Experimental Psychology: Learning, Memory, and Cognition · Pub Date: 2024-07-18 · DOI: 10.1037/xlm0001338
Kelly G Garner, Christopher R Nolan, Abbey Nydam, Zoie Nott, Howard Bowman, Paul E Dux
{"title":"量化注意力、执行功能和内隐学习中效应大小估计的误差。","authors":"Kelly G Garner, Christopher R Nolan, Abbey Nydam, Zoie Nott, Howard Bowman, Paul E Dux","doi":"10.1037/xlm0001338","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate quantification of effect sizes has the power to motivate theory and reduce misinvestment of scientific resources by informing power calculations during study planning. However, a combination of publication bias and small sample sizes (∼<i>N</i> = 25) hampers certainty in current effect size estimates. We sought to determine the extent to which sample sizes may produce errors in effect size estimates for four commonly used paradigms assessing attention, executive function, and implicit learning (attentional blink, multitasking, contextual cueing, and serial response task). We combined a large data set with a bootstrapping approach to simulate 1,000 experiments across a range of N (13-313). Beyond quantifying the effect size and statistical power that can be anticipated for each study design, we demonstrate that experiments with lower N may double or triple information loss. We also show that basing power calculations on effect sizes from similar studies yields a problematically imprecise estimate between 40% and 67% of the time, given commonly used sample sizes. Last, we show that skewness of intersubject behavioral effects may serve as a predictor of an erroneous estimate. We conclude with practical recommendations for researchers and demonstrate how our simulation approach can yield theoretical insights that are not readily achieved by other methods such as identifying the information gained from rejecting the null hypothesis and quantifying the contribution of individual variation to error in effect size estimates. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":50194,"journal":{"name":"Journal of Experimental Psychology-Learning Memory and Cognition","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Quantifying error in effect size estimates in attention, executive function, and implicit learning.\",\"authors\":\"Kelly G Garner, Christopher R Nolan, Abbey Nydam, Zoie Nott, Howard Bowman, Paul E Dux\",\"doi\":\"10.1037/xlm0001338\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Accurate quantification of effect sizes has the power to motivate theory and reduce misinvestment of scientific resources by informing power calculations during study planning. However, a combination of publication bias and small sample sizes (∼<i>N</i> = 25) hampers certainty in current effect size estimates. We sought to determine the extent to which sample sizes may produce errors in effect size estimates for four commonly used paradigms assessing attention, executive function, and implicit learning (attentional blink, multitasking, contextual cueing, and serial response task). We combined a large data set with a bootstrapping approach to simulate 1,000 experiments across a range of N (13-313). Beyond quantifying the effect size and statistical power that can be anticipated for each study design, we demonstrate that experiments with lower N may double or triple information loss. We also show that basing power calculations on effect sizes from similar studies yields a problematically imprecise estimate between 40% and 67% of the time, given commonly used sample sizes. 
Last, we show that skewness of intersubject behavioral effects may serve as a predictor of an erroneous estimate. We conclude with practical recommendations for researchers and demonstrate how our simulation approach can yield theoretical insights that are not readily achieved by other methods such as identifying the information gained from rejecting the null hypothesis and quantifying the contribution of individual variation to error in effect size estimates. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>\",\"PeriodicalId\":50194,\"journal\":{\"name\":\"Journal of Experimental Psychology-Learning Memory and Cognition\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Experimental Psychology-Learning Memory and Cognition\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/xlm0001338\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Psychology-Learning Memory and Cognition","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/xlm0001338","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Accurate quantification of effect sizes has the power to motivate theory and reduce misinvestment of scientific resources by informing power calculations during study planning. However, a combination of publication bias and small sample sizes (∼N = 25) hampers certainty in current effect size estimates. We sought to determine the extent to which sample sizes may produce errors in effect size estimates for four commonly used paradigms assessing attention, executive function, and implicit learning (attentional blink, multitasking, contextual cueing, and serial response task). We combined a large data set with a bootstrapping approach to simulate 1,000 experiments across a range of N (13-313). Beyond quantifying the effect size and statistical power that can be anticipated for each study design, we demonstrate that experiments with lower N may double or triple information loss. We also show that basing power calculations on effect sizes from similar studies yields a problematically imprecise estimate between 40% and 67% of the time, given commonly used sample sizes. Last, we show that skewness of intersubject behavioral effects may serve as a predictor of an erroneous estimate. We conclude with practical recommendations for researchers and demonstrate how our simulation approach can yield theoretical insights that are not readily achieved by other methods such as identifying the information gained from rejecting the null hypothesis and quantifying the contribution of individual variation to error in effect size estimates. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
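The bootstrapping approach summarized in the abstract can be made concrete with a short simulation. The following is a minimal sketch, not the authors' code: it resamples a synthetic, skewed pool of per-subject effects at several sample sizes (spanning the reported N range of 13-313) and reports how widely the resulting Cohen's d estimates spread. The distribution, its parameters, and the intermediate sample sizes are illustrative assumptions.

```python
# Minimal illustrative sketch (not the authors' code): mimics the kind of
# bootstrap simulation described in the abstract by resampling a synthetic
# pool of per-subject effects at several sample sizes N and summarizing how
# widely the estimated effect size (one-sample Cohen's d) varies.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject behavioral effects (e.g., condition differences in
# RT), drawn from a skewed distribution to mimic intersubject variability.
# The distribution and its parameters are illustrative assumptions.
population_effects = rng.gamma(shape=2.0, scale=15.0, size=5000) - 10.0

def cohens_d(x):
    """One-sample Cohen's d of the effects against zero."""
    return x.mean() / x.std(ddof=1)

n_experiments = 1000                   # simulated experiments per sample size
sample_sizes = [13, 25, 50, 100, 313]  # spans the reported N range (13-313)

for n in sample_sizes:
    # Each simulated "experiment" resamples n subjects with replacement.
    d_estimates = np.array([
        cohens_d(rng.choice(population_effects, size=n, replace=True))
        for _ in range(n_experiments)
    ])
    lo, hi = np.percentile(d_estimates, [2.5, 97.5])
    print(f"N={n:4d}: mean d = {d_estimates.mean():.2f}, "
          f"95% of estimates fall in [{lo:.2f}, {hi:.2f}]")
```

Under these assumptions, the spread of effect size estimates is noticeably wider at the small sample sizes commonly used (around N = 25) than at the upper end of the range, which is the kind of estimation error the paper quantifies for the attentional blink, multitasking, contextual cueing, and serial response paradigms.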

Source journal metrics: CiteScore 4.30 · Self-citation rate 3.80% · Annual publications 163 · Review time 4-8 weeks
Journal description: The Journal of Experimental Psychology: Learning, Memory, and Cognition publishes studies on perception, control of action, perceptual aspects of language processing, and related cognitive processes.
Latest articles from this journal:
A neural index reflecting the amount of cognitive resources available during memory encoding: A model-based approach.
Perceiving the "smallest" or "largest" multidigit number: A novel numeric-scale end effect.
The influence of complete and partial shared translation in the first language on semantic processing in the second language.
Word concreteness modulates bilingual language control during reading comprehension.
You sound like an evil young man: A distributional semantic analysis of systematic form-meaning associations for polarity, gender, and age in fictional characters' names.