{"title":"Broken Effects? How to Reduce False Positives in Panel Regressions","authors":"Xina Li, Phebo D. Wibbens","doi":"10.1287/stsc.2022.0172","DOIUrl":null,"url":null,"abstract":"Many published papers in the management field have used statistical methods that, according to the latest insights in econometrics, can lead to elevated rates of false positives: results that appear “significant,” whereas they are not. The question is how problematic these less robust econometric analyses are in practice for management research. This paper presents simulations and an empirical replication to investigate two widespread but now largely discredited practices in panel data analysis: nonclustered standard errors and random effects (RE). The simulations indicate that these two practices can lead to strongly elevated rates of false positives in typical empirical settings studied in management research. The often-advocated Hausman test does not always prevent false positives in RE regressions. Replication of a published regression that used RE and classic standard errors yields that many of the coefficients reported as significant in the original analysis become insignificant when using fixed effects and clustered standard errors, on a slightly different sample. Based on the findings in this paper, published results using nonclustered standard errors or RE estimates for panel data should be interpreted with great care, because the probability that they are false positives can be much larger than reported. Going forward, empirical researchers should cluster standard errors to account for serial correlation and use fixed rather than random effects to account for unobserved heterogeneity. Funding: X. Li received financial support from the Ian Potter ’93D PhD Award. 
Supplemental Material: The online appendix is available at https://doi.org/10.1287/stsc.2022.0172 .","PeriodicalId":45295,"journal":{"name":"Strategy Science","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Strategy Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1287/stsc.2022.0172","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 1
Abstract
Many published papers in the management field have used statistical methods that, according to the latest insights in econometrics, can lead to elevated rates of false positives: results that appear "significant" when in fact they are not. The question is how problematic these less robust econometric analyses are in practice for management research. This paper presents simulations and an empirical replication to investigate two widespread but now largely discredited practices in panel data analysis: nonclustered standard errors and random effects (RE). The simulations indicate that these two practices can lead to strongly elevated rates of false positives in typical empirical settings studied in management research. The often-advocated Hausman test does not always prevent false positives in RE regressions. Replicating a published regression that used RE and classic standard errors, on a slightly different sample, shows that many of the coefficients reported as significant in the original analysis become insignificant when using fixed effects and clustered standard errors. Based on the findings in this paper, published results using nonclustered standard errors or RE estimates for panel data should be interpreted with great care, because the probability that they are false positives can be much larger than reported. Going forward, empirical researchers should cluster standard errors to account for serial correlation and use fixed rather than random effects to account for unobserved heterogeneity. Funding: X. Li received financial support from the Ian Potter ’93D PhD Award. Supplemental Material: The online appendix is available at https://doi.org/10.1287/stsc.2022.0172 .
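The abstract's recommendation can be illustrated with a short sketch: estimate a panel model with entity fixed effects via the within transformation and compute standard errors clustered by entity. This is a minimal pure-NumPy illustration under an assumed data-generating process (AR(1) serial correlation within entities, a true coefficient of zero); the variable names and simulation are not from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 10        # entities, time periods
rho = 0.7            # within-entity serial correlation (illustrative)

def ar1(n, t, rho, rng):
    """Draw N AR(1) series of length T along the last axis."""
    z = rng.normal(size=(n, t))
    for s in range(1, t):
        z[:, s] = rho * z[:, s - 1] + z[:, s]
    return z

alpha = rng.normal(size=(N, 1))              # unobserved entity heterogeneity
x = (alpha + ar1(N, T, rho, rng)).ravel()    # regressor correlated with alpha
e = ar1(N, T, rho, rng).ravel()              # serially correlated errors
y = 0.0 * x + alpha.repeat(T) + e            # true coefficient is zero
entity = np.repeat(np.arange(N), T)

# Fixed effects: demean y and x within each entity (within transformation)
def within(v, g):
    return v - (np.bincount(g, weights=v) / np.bincount(g))[g]

xd, yd = within(x, entity), within(y, entity)
beta = (xd @ yd) / (xd @ xd)                 # within (FE) estimator
u = yd - beta * xd                           # residuals

# Cluster-robust SE: sum the scores x_it * u_it within each entity,
# with a simple G/(G-1) small-sample adjustment
s = np.bincount(entity, weights=xd * u)
se_clustered = np.sqrt(N / (N - 1) * (s @ s)) / (xd @ xd)

# Naive homoskedastic SE, which ignores the serial correlation and is
# typically too small in this setting
se_naive = np.sqrt((u @ u) / (N * T - N - 1) / (xd @ xd))

print(f"beta_hat={beta:.3f}  clustered SE={se_clustered:.3f}  naive SE={se_naive:.3f}")
```

Because the true coefficient is zero, a correctly sized test should reject only rarely; comparing the two standard errors across simulated draws is the kind of exercise the paper's simulations perform at scale.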