A Systematic Review on the Evolution of Power Analysis Practices in Psychological Research

Lara Vankelecom, Ole Schacht, Nathan Laroy, Tom Loeys, Beatrijs Moerkerke

Psychologica Belgica, 65(1), 17-37. Published 2025-01-09. DOI: 10.5334/pb.1318
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11720577/pdf/
Performing hypothesis tests with adequate statistical power is indispensable for psychological research. Following several large-scale replication projects prompted by the replication crisis, concerns about the root causes of this crisis - such as questionable research practices (QRPs) - have grown. While initial efforts primarily addressed the inflation of the Type I error rate due to QRPs, recent attention has shifted to the adverse consequences of low statistical power. In this paper we first argue how underpowered studies, in combination with publication bias, contribute to a literature rife with false positive results and overestimated effect sizes. We then examine whether the prevalence of power analyses in psychological research has increased over time in response to growing awareness of these phenomena. To address this, we conducted a systematic review of 903 published empirical articles across four APA disciplines, comparing 453 papers published in 2015-2016 with 450 papers from 2020-2021. Although the prevalence of power analysis across different domains in psychology has increased over time (from 9.5% to 30%), it remains insufficient overall. We conclude by discussing the implications of these findings and elaborating on some alternatives to a priori power analysis that can help ensure sufficient statistical power.
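For readers unfamiliar with the a priori power analysis the abstract refers to, the following is a minimal sketch of what such an analysis looks like in practice, using the `statsmodels` library for an independent-samples t-test. The effect size, alpha, and power values are illustrative conventions (Cohen's medium effect, 5% significance, 80% power), not numbers taken from the paper.

```python
# A priori power analysis: given a target effect size, significance level,
# and desired power, solve for the required sample size per group.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Detect a medium effect (Cohen's d = 0.5) at alpha = .05 with 80% power,
# two-sided independent-samples t-test.
n_per_group = analysis.solve_power(
    effect_size=0.5,
    alpha=0.05,
    power=0.8,
    alternative="two-sided",
)

print(round(n_per_group))  # required participants per group
```

Running this kind of calculation before data collection (rather than after) is what distinguishes an a priori power analysis from the post hoc variants the review also discusses.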