Quantifying error in effect size estimates in attention, executive function, and implicit learning
Kelly G Garner, Christopher R Nolan, Abbey Nydam, Zoie Nott, Howard Bowman, Paul E Dux
Journal of Experimental Psychology: Learning, Memory, and Cognition
DOI: 10.1037/xlm0001338 · Published 2024-07-18 · Journal Article · Impact factor 2.2 (JCR Q2, Psychology)
Citations: 0
Abstract
Accurate quantification of effect sizes has the power to motivate theory and reduce misinvestment of scientific resources by informing power calculations during study planning. However, a combination of publication bias and small sample sizes (∼N = 25) hampers certainty in current effect size estimates. We sought to determine the extent to which sample sizes may produce errors in effect size estimates for four commonly used paradigms assessing attention, executive function, and implicit learning (attentional blink, multitasking, contextual cueing, and serial response task). We combined a large data set with a bootstrapping approach to simulate 1,000 experiments across a range of N (13-313). Beyond quantifying the effect size and statistical power that can be anticipated for each study design, we demonstrate that experiments with lower N may double or triple information loss. We also show that basing power calculations on effect sizes from similar studies yields a problematically imprecise estimate between 40% and 67% of the time, given commonly used sample sizes. Last, we show that skewness of intersubject behavioral effects may serve as a predictor of an erroneous estimate. We conclude with practical recommendations for researchers and demonstrate how our simulation approach can yield theoretical insights that are not readily achieved by other methods such as identifying the information gained from rejecting the null hypothesis and quantifying the contribution of individual variation to error in effect size estimates. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
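The bootstrapping approach described in the abstract — resampling subjects with replacement from a large data set to simulate many smaller experiments, then examining the spread of effect size estimates across simulated sample sizes — can be sketched as follows. This is a minimal illustration, not the authors' code: the population of per-subject effects, the true effect size, and the use of a one-sample Cohen's d are all assumptions made for the example.

```python
import random
import statistics

def cohens_d(effects):
    """One-sample (within-subject) Cohen's d: mean effect over its SD."""
    return statistics.mean(effects) / statistics.stdev(effects)

def bootstrap_effect_sizes(population, n, n_experiments=1000, seed=0):
    """Resample n subjects with replacement from the large data set
    n_experiments times; return the effect size from each simulated
    experiment."""
    rng = random.Random(seed)
    return [cohens_d([rng.choice(population) for _ in range(n)])
            for _ in range(n_experiments)]

# Hypothetical "large data set" of per-subject behavioral effects
# (e.g., attentional blink magnitude), drawn here from a Gaussian
# with a true standardized effect of 0.5 -- an assumption for the demo.
rng = random.Random(1)
population = [rng.gauss(0.5, 1.0) for _ in range(300)]

# Spread of effect size estimates shrinks as N grows, which is why
# small-N experiments yield imprecise estimates for power planning.
for n in (13, 25, 100, 313):
    ds = bootstrap_effect_sizes(population, n)
    print(f"N={n:3d}: mean d={statistics.mean(ds):.2f}, "
          f"SD across experiments={statistics.stdev(ds):.2f}")
```

Running this shows the between-experiment variability in estimated d collapsing as N increases from 13 toward 313, mirroring the paper's point that commonly used sample sizes leave effect size estimates problematically imprecise.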
About the journal:
The Journal of Experimental Psychology: Learning, Memory, and Cognition publishes studies on perception, control of action, perceptual aspects of language processing, and related cognitive processes.