Simulation-Based Power Analysis for Factorial Analysis of Variance Designs
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920951503
D. Lakens, Aaron R. Caldwell
Researchers often rely on analysis of variance (ANOVA) when they report results of experiments. To ensure that a study is adequately powered to yield informative results with an ANOVA, researchers can perform an a priori power analysis. However, power analysis for factorial ANOVA designs is often a challenge. Current software solutions do not allow power analyses for complex designs with several within-participants factors. Moreover, power analyses often need partial eta squared (ηₚ²) or Cohen’s f as input, but these effect sizes are not intuitive and do not generalize to different experimental designs. We have created the R package Superpower and online Shiny apps to enable researchers without extensive programming experience to perform simulation-based power analysis for ANOVA designs of up to three within- or between-participants factors. Predicted effects are entered by specifying means, standard deviations, and, for within-participants factors, the correlations. The simulation provides the statistical power for all ANOVA main effects, interactions, and individual comparisons. The software can plot power across a range of sample sizes, can control for multiple comparisons, and can compute power when the homogeneity or sphericity assumption is violated. This Tutorial demonstrates how to perform a priori power analysis to design informative studies for main effects, interactions, and individual comparisons and highlights important factors that determine the statistical power for factorial ANOVA designs.
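For readers who want a feel for the workflow, here is a minimal sketch using the package’s ANOVA_design()/ANOVA_power() interface; the design string, means, standard deviation, and correlation below are illustrative assumptions, not values from the article.

```r
# Minimal Superpower sketch: ANOVA_design() specifies the predicted
# pattern of effects; ANOVA_power() simulates power for it.
# All parameter values here are illustrative assumptions.
# install.packages("Superpower")
library(Superpower)

design <- ANOVA_design(
  design = "2b*2w",            # one between- and one within-participants factor
  n = 40,                      # participants per between-participants cell
  mu = c(1.0, 1.2, 1.0, 1.6),  # predicted cell means
  sd = 1.0,                    # common standard deviation
  r = 0.5                      # correlation among within-participants levels
)

# Simulated power for all main effects, the interaction, and
# pairwise comparisons
result <- ANOVA_power(design, alpha_level = 0.05, nsims = 1000)

# Power across a range of sample sizes:
# plot_power(design, min_n = 10, max_n = 100)
```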
{"title":"Simulation-Based Power Analysis for Factorial Analysis of Variance Designs","authors":"D. Lakens, Aaron R. Caldwell","doi":"10.1177/2515245920951503","DOIUrl":"https://doi.org/10.1177/2515245920951503","url":null,"abstract":"Researchers often rely on analysis of variance (ANOVA) when they report results of experiments. To ensure that a study is adequately powered to yield informative results with an ANOVA, researchers can perform an a priori power analysis. However, power analysis for factorial ANOVA designs is often a challenge. Current software solutions do not allow power analyses for complex designs with several within-participants factors. Moreover, power analyses often need η p 2 or Cohen’s f as input, but these effect sizes are not intuitive and do not generalize to different experimental designs. We have created the R package Superpower and online Shiny apps to enable researchers without extensive programming experience to perform simulation-based power analysis for ANOVA designs of up to three within- or between-participants factors. Predicted effects are entered by specifying means, standard deviations, and, for within-participants factors, the correlations. The simulation provides the statistical power for all ANOVA main effects, interactions, and individual comparisons. The software can plot power across a range of sample sizes, can control for multiple comparisons, and can compute power when the homogeneity or sphericity assumption is violated. This Tutorial demonstrates how to perform a priori power analysis to design informative studies for main effects, interactions, and individual comparisons and highlights important factors that determine the statistical power for factorial ANOVA designs.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920951503","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47221773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiment-Wise Type I Error Control: A Focus on 2 × 2 Designs
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920985137
Andrew V. Frane
Factorial designs are common in psychology research, but they are nearly always used without control of the experiment-wise Type I error rate (EWER), perhaps because of a lack of awareness of viable procedures for controlling it and a lack of appreciation of the problem of Type I error inflation. In this article, key concepts relating to Type I error inflation are discussed, with emphasis on the 2 × 2 factorial design. Simulations are used to evaluate various approaches in that context. I show that conventional approaches often do not control the EWER. Alternative approaches are recommended that reliably control the EWER and are simple to implement.
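As a rough illustration of the inflation problem (this simulation is mine, not code from the article), the following estimates the EWER of the three F tests in a 2 × 2 between-subjects ANOVA under the complete null, with and without a Bonferroni adjustment.

```r
# Under the complete null, running the three F tests (two main effects
# plus the interaction) each at alpha = .05 yields an EWER of roughly
# 1 - .95^3 ≈ .14; testing each at alpha/3 restores control.
set.seed(1)
n_per_cell <- 20
a <- gl(2, n_per_cell * 2)                        # factor A
b <- gl(2, n_per_cell, length = n_per_cell * 4)   # factor B, crossed with A

ewer <- replicate(5000, {
  y <- rnorm(n_per_cell * 4)                      # no true effects anywhere
  p <- summary(aov(y ~ a * b))[[1]][["Pr(>F)"]][1:3]
  c(uncorrected = any(p < .05), bonferroni = any(p < .05 / 3))
})
rowMeans(ewer)  # uncorrected ≈ .14, bonferroni ≤ .05
```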
{"title":"Experiment-Wise Type I Error Control: A Focus on 2 × 2 Designs","authors":"Andrew V. Frane","doi":"10.1177/2515245920985137","DOIUrl":"https://doi.org/10.1177/2515245920985137","url":null,"abstract":"Factorial designs are common in psychology research. But they are nearly always used without control of the experiment-wise Type I error rate (EWER), perhaps because of a lack of awareness about viable procedures for that purpose and perhaps also because of a lack of appreciation for the problem of Type I error inflation. In this article, key concepts relating to Type I error inflation are discussed, with emphasis on the 2 × 2 factorial design. Simulations are used to evaluate various approaches in that context. I show that conventional approaches often do not control the EWER. Alternative approaches are recommended that reliably control the EWER and are simple to implement.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920985137","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43793200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making the Black Box Transparent: A Template and Tutorial for Registration of Studies Using Experience-Sampling Methods
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920924686
O. Kirtley, G. Lafit, R. Achterhof, Anu P. Hiekkaranta, I. Myin-Germeys
A growing interest in understanding complex and dynamic psychological processes as they occur in everyday life has led to an increase in studies using ambulatory assessment techniques, including the experience-sampling method (ESM) and ecological momentary assessment. These methods, however, tend to involve numerous forking paths and researcher degrees of freedom, even beyond those typically encountered with other research methodologies. Although a number of researchers working with ESM techniques are actively engaged in efforts to increase the methodological rigor and transparency of research that uses them, currently there is little routine implementation of open-science practices in ESM research. In this article, we discuss the ways in which ESM research is especially vulnerable to threats to transparency, reproducibility, and replicability. We propose that greater use of study registration, a cornerstone of open science, may address some of these threats to the transparency of ESM research. Registration of ESM research is not without challenges, including model selection, accounting for potential model-convergence issues, and the use of preexisting data sets. As these may prove to be significant barriers for ESM researchers, we also discuss ways of overcoming these challenges and of documenting them in a registration. A further challenge is that current general preregistration templates do not adequately capture the unique features of ESM. We present a registration template for ESM research and also discuss registration of studies using preexisting data.
{"title":"Making the Black Box Transparent: A Template and Tutorial for Registration of Studies Using Experience-Sampling Methods","authors":"O. Kirtley, G. Lafit, R. Achterhof, Anu P. Hiekkaranta, I. Myin-Germeys","doi":"10.1177/2515245920924686","DOIUrl":"https://doi.org/10.1177/2515245920924686","url":null,"abstract":"A growing interest in understanding complex and dynamic psychological processes as they occur in everyday life has led to an increase in studies using ambulatory assessment techniques, including the experience-sampling method (ESM) and ecological momentary assessment. These methods, however, tend to involve numerous forking paths and researcher degrees of freedom, even beyond those typically encountered with other research methodologies. Although a number of researchers working with ESM techniques are actively engaged in efforts to increase the methodological rigor and transparency of research that uses them, currently there is little routine implementation of open-science practices in ESM research. In this article, we discuss the ways in which ESM research is especially vulnerable to threats to transparency, reproducibility, and replicability. We propose that greater use of study registration, a cornerstone of open science, may address some of these threats to the transparency of ESM research. Registration of ESM research is not without challenges, including model selection, accounting for potential model-convergence issues, and the use of preexisting data sets. As these may prove to be significant barriers for ESM researchers, we also discuss ways of overcoming these challenges and of documenting them in a registration. A further challenge is that current general preregistration templates do not adequately capture the unique features of ESM. We present a registration template for ESM research and also discuss registration of studies using preexisting data.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920924686","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42894342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Ego-Centered Social Networks in formr: A Tutorial
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920985467
Louisa M. Reins, Ruben C. Arslan, Tanja M. Gerlach
In psychological science, ego-centered social networks are assessed to investigate the patterning and development of social relationships. In this approach, a focal individual is typically asked to report the people they interact with in specific contexts and to provide additional information on those interaction partners and the relationships with them. Although ego-centered social networks hold considerable promise for investigating various interesting questions from psychology and beyond, their implementation can be challenging. This tutorial provides researchers with detailed instructions on how to set up a study involving ego-centered social networks online using the open-source software formr. By including a fully functional study template for the assessment of social networks and extensions to this design, we hope to equip researchers from different backgrounds with the tools necessary to collect social-network data tailored to their research needs.
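As a hypothetical sketch of one step in such a workflow (not code from the tutorial), the companion formr R package can pull collected data into an R session; the survey name and credentials below are placeholders, and the formr_connect()/formr_results() calls are assumed from the package documentation.

```r
# Hypothetical sketch: retrieve ego-centered network data collected with
# formr. Survey name and credentials are placeholders;
# formr_connect()/formr_results() are assumed from the package docs.
# remotes::install_github("rubenarslan/formr")
library(formr)

formr_connect(email = "researcher@example.org", password = "placeholder")

# One row per participant; alter and relationship items appear as columns
net <- formr_results("ego_network_survey")
head(net)
```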
{"title":"Assessing Ego-Centered Social Networks in formr: A Tutorial","authors":"Louisa M. Reins, Ruben C. Arslan, Tanja M. Gerlach","doi":"10.1177/2515245920985467","DOIUrl":"https://doi.org/10.1177/2515245920985467","url":null,"abstract":"In psychological science, ego-centered social networks are assessed to investigate the patterning and development of social relationships. In this approach, a focal individual is typically asked to report the people they interact with in specific contexts and to provide additional information on those interaction partners and the relationships with them. Although ego-centered social networks hold considerable promise for investigating various interesting questions from psychology and beyond, their implementation can be challenging. This tutorial provides researchers with detailed instructions on how to set up a study involving ego-centered social networks online using the open-source software formr. By including a fully functional study template for the assessment of social networks and extensions to this design, we hope to equip researchers from different backgrounds with the tools necessary to collect social-network data tailored to their research needs.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920985467","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46924813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selection of the Number of Participants in Intensive Longitudinal Studies: A User-Friendly Shiny App and Tutorial for Performing Power Analysis in Multilevel Regression Models That Account for Temporal Dependencies
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920978738
G. Lafit, J. Adolf, Egon Dejonckheere, I. Myin-Germeys, W. Viechtbauer, E. Ceulemans
In recent years, the popularity of procedures for collecting intensive longitudinal data, such as the experience-sampling method, has increased greatly. The data collected using such designs allow researchers to study the dynamics of psychological functioning and how these dynamics differ across individuals. To this end, the data are often modeled with multilevel regression models. An important question that arises when researchers design intensive longitudinal studies is how to determine the number of participants needed to test specific hypotheses regarding the parameters of these models with sufficient power. Power calculations for intensive longitudinal studies are challenging because of the hierarchical data structure in which repeated observations are nested within the individuals and because of the serial dependence that is typically present in these data. We therefore present a user-friendly application and step-by-step tutorial for performing simulation-based power analyses for a set of models that are popular in intensive longitudinal research. Because many studies use the same sampling protocol (i.e., a fixed number of at least approximately equidistant observations) within individuals, we assume that this protocol is fixed and focus on the number of participants. All included models explicitly account for the temporal dependencies in the data by assuming serially correlated errors or including autoregressive effects.
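A bare-bones version of such a simulation-based power analysis might look as follows. This is a sketch, not the authors’ Shiny app: it assumes a random-intercept model with AR(1) errors fit with nlme, and all parameter values are illustrative.

```r
# Sketch: power for the fixed slope of a time-varying predictor in a
# multilevel model with serially correlated (AR(1)) within-person errors.
# All parameter values are illustrative assumptions.
library(nlme)

sim_power <- function(n_participants, n_obs = 70, beta = 0.2,
                      sd_int = 1, sd_ar = 1, phi = 0.4,
                      nsims = 200, alpha = 0.05) {
  p_values <- replicate(nsims, {
    id <- rep(seq_len(n_participants), each = n_obs)
    x  <- rnorm(n_participants * n_obs)            # time-varying predictor
    u0 <- rnorm(n_participants, 0, sd_int)[id]     # random intercepts
    e  <- as.vector(replicate(n_participants,      # AR(1) error process
            as.numeric(arima.sim(list(ar = phi), n_obs, sd = sd_ar))))
    d  <- data.frame(y = u0 + beta * x + e, x = x, id = id)
    fit <- lme(y ~ x, random = ~ 1 | id,
               correlation = corAR1(form = ~ 1 | id), data = d)
    summary(fit)$tTable["x", "p-value"]
  })
  mean(p_values < alpha)  # proportion of significant slope tests = power
}

# Estimated power for, e.g., 40 participants with 70 beeps each:
# sim_power(40)
```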
{"title":"Selection of the Number of Participants in Intensive Longitudinal Studies: A User-Friendly Shiny App and Tutorial for Performing Power Analysis in Multilevel Regression Models That Account for Temporal Dependencies","authors":"G. Lafit, J. Adolf, Egon Dejonckheere, I. Myin-Germeys, W. Viechtbauer, E. Ceulemans","doi":"10.1177/2515245920978738","DOIUrl":"https://doi.org/10.1177/2515245920978738","url":null,"abstract":"In recent years, the popularity of procedures for collecting intensive longitudinal data, such as the experience-sampling method, has increased greatly. The data collected using such designs allow researchers to study the dynamics of psychological functioning and how these dynamics differ across individuals. To this end, the data are often modeled with multilevel regression models. An important question that arises when researchers design intensive longitudinal studies is how to determine the number of participants needed to test specific hypotheses regarding the parameters of these models with sufficient power. Power calculations for intensive longitudinal studies are challenging because of the hierarchical data structure in which repeated observations are nested within the individuals and because of the serial dependence that is typically present in these data. We therefore present a user-friendly application and step-by-step tutorial for performing simulation-based power analyses for a set of models that are popular in intensive longitudinal research. Because many studies use the same sampling protocol (i.e., a fixed number of at least approximately equidistant observations) within individuals, we assume that this protocol is fixed and focus on the number of participants. All included models explicitly account for the temporal dependencies in the data by assuming serially correlated errors or including autoregressive effects.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920978738","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44058028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding Mixed-Effects Models Through Data Simulation
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920965119
L. DeBruine, D. Barr
Experimental designs that sample both subjects and stimuli from a larger population need to account for random effects of both subjects and stimuli using mixed-effects models. However, much of this research is analyzed using analysis of variance on aggregated responses because researchers are not confident specifying and interpreting mixed-effects models. This Tutorial explains how to simulate data with random-effects structure and analyze the data using linear mixed-effects regression (with the lme4 R package), with a focus on interpreting the output in light of the simulated parameters. Data simulation not only can enhance understanding of how these models work, but also enables researchers to perform power calculations for complex designs. All materials associated with this article can be accessed at https://osf.io/3cz2e/.
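A condensed sketch of the approach follows; the parameter values are illustrative rather than the Tutorial’s, and the full materials are at the OSF link above.

```r
# Simulate responses with crossed random effects of subjects and items
# (stimuli), then recover the parameters with lme4. Values are
# illustrative assumptions.
library(lme4)
set.seed(1)

n_subj <- 50; n_item <- 25
beta_0 <- 800; beta_1 <- 50   # grand mean; fixed effect of condition
tau_0   <- 100                # SD of by-subject random intercepts
omega_0 <- 80                 # SD of by-item random intercepts
sigma   <- 200                # residual SD

dat <- expand.grid(subj = factor(seq_len(n_subj)),
                   item = factor(seq_len(n_item)))
# deviation-coded between-item condition
dat$cond <- ifelse(as.integer(dat$item) %% 2 == 0, 0.5, -0.5)
dat$y <- beta_0 + beta_1 * dat$cond +
  rnorm(n_subj, 0, tau_0)[dat$subj] +    # by-subject intercept offsets
  rnorm(n_item, 0, omega_0)[dat$item] +  # by-item intercept offsets
  rnorm(nrow(dat), 0, sigma)             # trial-level noise

mod <- lmer(y ~ cond + (1 | subj) + (1 | item), data = dat)
summary(mod)  # estimates should approximate the simulated parameters
```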
{"title":"Understanding Mixed-Effects Models Through Data Simulation","authors":"L. DeBruine, D. Barr","doi":"10.1177/2515245920965119","DOIUrl":"https://doi.org/10.1177/2515245920965119","url":null,"abstract":"Experimental designs that sample both subjects and stimuli from a larger population need to account for random effects of both subjects and stimuli using mixed-effects models. However, much of this research is analyzed using analysis of variance on aggregated responses because researchers are not confident specifying and interpreting mixed-effects models. This Tutorial explains how to simulate data with random-effects structure and analyze the data using linear mixed-effects regression (with the lme4 R package), with a focus on interpreting the output in light of the simulated parameters. Data simulation not only can enhance understanding of how these models work, but also enables researchers to perform power calculations for complex designs. All materials associated with this article can be accessed at https://osf.io/3cz2e/.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920965119","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43610689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making Sense of Model Generalizability: A Tutorial on Cross-Validation in R and Shiny
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920947067
Q. Song, Chen Tang, Serena Wee
Model generalizability describes how well the findings from a sample are applicable to other samples in the population. In this Tutorial, we explain model generalizability through the statistical concept of model overfitting and its outcome (i.e., validity shrinkage in new samples), and we use a Shiny app to simulate and visualize how model generalizability is influenced by three factors: model complexity, sample size, and effect size. We then discuss cross-validation as an approach for evaluating model generalizability and provide guidelines for implementing this approach. To help researchers understand how to apply cross-validation to their own research, we walk through an example, accompanied by step-by-step illustrations in R. This Tutorial is expected to help readers develop the basic knowledge and skills to use cross-validation to evaluate model generalizability in their research and practice.
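A minimal base-R sketch of the idea, using illustrative simulated data rather than the Tutorial’s example: the in-sample R² of an overfit model is optimistic, and k-fold cross-validation reveals the shrinkage.

```r
# k-fold cross-validation for a linear model with many noise predictors:
# compare in-sample R^2 with cross-validated R^2 (validity shrinkage).
set.seed(2020)
n <- 200
X <- matrix(rnorm(n * 10), n, 10)   # 10 predictors, mostly noise
y <- 0.3 * X[, 1] + rnorm(n)        # only the first predictor matters
d <- data.frame(y, X)

k <- 10
fold <- sample(rep(seq_len(k), length.out = n))  # random fold assignment
pred <- numeric(n)
for (i in seq_len(k)) {
  fit <- lm(y ~ ., data = d[fold != i, ])            # train on k - 1 folds
  pred[fold == i] <- predict(fit, d[fold == i, ])    # predict held-out fold
}

summary(lm(y ~ ., data = d))$r.squared  # in-sample R^2 (optimistic)
cor(y, pred)^2                          # cross-validated R^2 (shrunk)
```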
{"title":"Making Sense of Model Generalizability: A Tutorial on Cross-Validation in R and Shiny","authors":"Q. Song, Chen Tang, Serena Wee","doi":"10.1177/2515245920947067","DOIUrl":"https://doi.org/10.1177/2515245920947067","url":null,"abstract":"Model generalizability describes how well the findings from a sample are applicable to other samples in the population. In this Tutorial, we explain model generalizability through the statistical concept of model overfitting and its outcome (i.e., validity shrinkage in new samples), and we use a Shiny app to simulate and visualize how model generalizability is influenced by three factors: model complexity, sample size, and effect size. We then discuss cross-validation as an approach for evaluating model generalizability and provide guidelines for implementing this approach. To help researchers understand how to apply cross-validation to their own research, we walk through an example, accompanied by step-by-step illustrations in R. This Tutorial is expected to help readers develop the basic knowledge and skills to use cross-validation to evaluate model generalizability in their research and practice.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920947067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42360834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precise Answers to Vague Questions: Issues With Interactions
Pub Date: 2020-12-04 | DOI: 10.1177/25152459211007368
J. Rohrer, Ruben C. Arslan
Psychological theories often invoke interactions but remain vague regarding the details. As a consequence, researchers may not know how to properly test them and may potentially run analyses that reliably return the wrong answer to their research question. We discuss three major issues regarding the prediction and interpretation of interactions. First, interactions can be removable in the sense that they appear or disappear depending on scaling decisions, with consequences for a variety of situations (e.g., binary or categorical outcomes, bounded scales with floor and ceiling effects). Second, interactions may be conceptualized as changes in slope or changes in correlations, and because these two phenomena do not necessarily coincide, researchers might draw wrong conclusions. Third, interactions may or may not be causally identified, and this determines which interpretations are valid. Researchers who remain unaware of these distinctions might accidentally analyze their data in a manner that returns the technically correct answer to the wrong question. We illustrate all issues with examples from psychology and issue recommendations for how to best address them in a productive manner.
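To make the first issue concrete, here is an illustrative simulation (mine, not the authors’): the data-generating process is purely additive on the logit scale, yet an interaction appears when the same binary outcome is analyzed on the probability scale.

```r
# A "removable" interaction: logits are additive (no interaction term in
# the data-generating model), but a linear probability model finds an
# interaction, because effects on the probability scale are compressed
# as the baseline probability approaches the ceiling.
set.seed(42)
n  <- 1e5
x1 <- rbinom(n, 1, 0.5)
x2 <- rbinom(n, 1, 0.5)
y  <- rbinom(n, 1, plogis(1.5 * x1 + 1.5 * x2))  # additive on logit scale

coef(summary(glm(y ~ x1 * x2, family = binomial)))["x1:x2", ]  # ≈ 0
coef(summary(lm(y ~ x1 * x2)))["x1:x2", ]  # ≈ -0.18: "interaction" appears
```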
{"title":"Precise Answers to Vague Questions: Issues With Interactions","authors":"J. Rohrer, Ruben C. Arslan","doi":"10.1177/25152459211007368","DOIUrl":"https://doi.org/10.1177/25152459211007368","url":null,"abstract":"Psychological theories often invoke interactions but remain vague regarding the details. As a consequence, researchers may not know how to properly test them and may potentially run analyses that reliably return the wrong answer to their research question. We discuss three major issues regarding the prediction and interpretation of interactions. First, interactions can be removable in the sense that they appear or disappear depending on scaling decisions, with consequences for a variety of situations (e.g., binary or categorical outcomes, bounded scales with floor and ceiling effects). Second, interactions may be conceptualized as changes in slope or changes in correlations, and because these two phenomena do not necessarily coincide, researchers might draw wrong conclusions. Third, interactions may or may not be causally identified, and this determines which interpretations are valid. Researchers who remain unaware of these distinctions might accidentally analyze their data in a manner that returns the technically correct answer to the wrong question. We illustrate all issues with examples from psychology and issue recommendations for how to best address them in a productive manner.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/25152459211007368","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44894898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corrigendum: Evaluating Effect Size in Psychological Research: Sense and Nonsense
Pub Date: 2020-12-01 | DOI: 10.1177/2515245920979282
Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them
Pub Date: 2020-12-01 | DOI: 10.1177/2515245920952393
J. Flake, E. Fried
In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.
{"title":"Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them","authors":"J. Flake, E. Fried","doi":"10.1177/2515245920952393","DOIUrl":"https://doi.org/10.1177/2515245920952393","url":null,"abstract":"In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"456 - 465"},"PeriodicalIF":13.6,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920952393","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45736675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}