Pub Date: 2020-11-06 | DOI: 10.1177/2515245920951747
D. Flora
Measurement quality has recently been highlighted as an important concern for advancing a cumulative psychological science. An implication is that researchers should move beyond mechanistically reporting coefficient alpha toward more carefully assessing the internal structure and reliability of multi-item scales. Yet a researcher may be discouraged upon discovering that a prominent alternative to alpha, namely, coefficient omega, can be calculated in a variety of ways. In this Tutorial, I alleviate this potential confusion by describing alternative forms of omega and providing guidelines for choosing an appropriate omega estimate pertaining to the measurement of a target construct represented with a confirmatory factor analysis model. Several applied examples demonstrate how to compute different forms of omega in R.
{"title":"Your Coefficient Alpha Is Probably Wrong, but Which Coefficient Omega Is Right? A Tutorial on Using R to Obtain Better Reliability Estimates","authors":"D. Flora","doi":"10.1177/2515245920951747","DOIUrl":"https://doi.org/10.1177/2515245920951747","url":null,"abstract":"Measurement quality has recently been highlighted as an important concern for advancing a cumulative psychological science. An implication is that researchers should move beyond mechanistically reporting coefficient alpha toward more carefully assessing the internal structure and reliability of multi-item scales. Yet a researcher may be discouraged upon discovering that a prominent alternative to alpha, namely, coefficient omega, can be calculated in a variety of ways. In this Tutorial, I alleviate this potential confusion by describing alternative forms of omega and providing guidelines for choosing an appropriate omega estimate pertaining to the measurement of a target construct represented with a confirmatory factor analysis model. Several applied examples demonstrate how to compute different forms of omega in R.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"484 - 501"},"PeriodicalIF":13.6,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920951747","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46361358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-05 | DOI: 10.1177/25152459211045930
Márton Kovács, Rink Hoekstra, B. Aczel
Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays due to human errors while collecting, analyzing, or reporting data. The present study is an exploration of mistakes made during the data-management process in psychological research. We surveyed 488 researchers regarding the type, frequency, seriousness, and outcome of mistakes that have occurred in their research team during the last 5 years. The majority of respondents suggested that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half of respondents, and major or extreme consequences for about one fifth of respondents. The most frequently reported types of mistakes were ambiguous naming/defining of data, version control error, and wrong data processing/analysis. Most mistakes were reportedly due to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative of psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and the development of solutions to reduce errors and mitigate their impact.
{"title":"The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management","authors":"Márton Kovács, Rink Hoekstra, B. Aczel","doi":"10.1177/25152459211045930","DOIUrl":"https://doi.org/10.1177/25152459211045930","url":null,"abstract":"Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays due to human errors while collecting, analyzing, or reporting data. The present study is an exploration of mistakes made during the data-management process in psychological research. We surveyed 488 researchers regarding the type, frequency, seriousness, and outcome of mistakes that have occurred in their research team during the last 5 years. The majority of respondents suggested that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half of respondents, and major or extreme consequences for about one fifth of respondents. The most frequently reported types of mistakes were ambiguous naming/defining of data, version control error, and wrong data processing/analysis. Most mistakes were reportedly due to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative for psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and the development of solutions to reduce errors and mitigate their impact.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42747473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-15 | DOI: 10.1177/2515245920957618
Eunike Wetzel, B. Roberts
Hussey and Hughes (2020) analyzed four aspects relevant to the structural validity of a psychological scale (internal consistency, test-retest reliability, factor structure, and measurement invariance) in 15 self-report questionnaires, some of which, such as the Big Five Inventory (John & Srivastava, 1999) and the Rosenberg Self-Esteem Scale (Rosenberg, 1965), are very popular. In this Commentary, we argue that (a) their claim that measurement issues like these are ignored is incorrect, (b) the models they used to test structural validity do not match the construct space for many of the measures, and (c) their analyses and conclusions regarding measurement invariance were needlessly limited to a dichotomous decision rule. First, we believe it is important to note that we are in agreement with the sentiment behind Hussey and Hughes’s study and the previous work that appeared to inspire it (Flake, Pek, & Hehman, 2017). Measurement issues are seldom the focus of the articles published in the top journals in personality and social psychology, and the quality of the measures used by researchers is not a top priority in evaluating the value of the research. Furthermore, the use of ad hoc measures is common in some fields. Nonetheless, we disagree with the authors’ analyses, interpretations, and conclusions concerning the validity of these 15 specific measures for the three reasons we discuss here.
{"title":"Commentary on Hussey and Hughes (2020): Hidden Invalidity Among 15 Commonly Used Measures in Social and Personality Psychology","authors":"Eunike Wetzel, B. Roberts","doi":"10.1177/2515245920957618","DOIUrl":"https://doi.org/10.1177/2515245920957618","url":null,"abstract":"Hussey and Hughes (2020) analyzed four aspects relevant to the structural validity of a psychological scale (internal consistency, test-retest reliability, factor structure, and measurement invariance) in 15 self-report questionnaires, some of which, such as the Big Five Inventory ( John & Srivastava, 1999) and the Rosenberg Self-Esteem Scale (Rosenberg, 1965), are very popular. In this Commentary, we argue that (a) their claim that measurement issues like these are ignored is incorrect, (b) the models they used to test structural validity do not match the construct space for many of the measures, and (c) their analyses and conclusions regarding measurement invariance were needlessly limited to a dichotomous decision rule. First, we believe it is important to note that we are in agreement with the sentiment behind Hussey and Hughes’s study and the previous work that appeared to inspire it (Flake, Pek, & Hehman, 2017). Measurement issues are seldom the focus of the articles published in the top journals in personality and social psychology, and the quality of the measures used by researchers is not a top priority in evaluating the value of the research. Furthermore, the use of ad hoc measures is common in some fields. Nonetheless, we disagree with the authors’ analyses, interpretations, and conclusions concerning the validity of these 15 specific measures for the three reasons we discuss here.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"505 - 508"},"PeriodicalIF":13.6,"publicationDate":"2020-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920957618","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44972857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-13 | DOI: 10.1177/25152459221095823
Anna C. Wysocki, K. Lawson, M. Rhemtulla
It is common practice in correlational or quasiexperimental studies to use statistical control to remove confounding effects from a regression coefficient. Controlling for relevant confounders can debias the estimated causal effect of a predictor on an outcome; that is, it can bring the estimated regression coefficient closer to the value of the true causal effect. But statistical control works only under ideal circumstances. When the selected control variables are inappropriate, controlling can result in estimates that are more biased than uncontrolled estimates. Despite the ubiquity of statistical control in published regression analyses and the consequences of controlling for inappropriate third variables, the selection of control variables is rarely explicitly justified in print. We argue that to carefully select appropriate control variables, researchers must propose and defend a causal structure that includes the outcome, predictors, and plausible confounders. We underscore the importance of causality when selecting control variables by demonstrating how regression coefficients are affected by controlling for appropriate and inappropriate variables. Finally, we provide practical recommendations for applied researchers who wish to use statistical control.
{"title":"Statistical Control Requires Causal Justification","authors":"Anna C. Wysocki, K. Lawson, M. Rhemtulla","doi":"10.1177/25152459221095823","DOIUrl":"https://doi.org/10.1177/25152459221095823","url":null,"abstract":"It is common practice in correlational or quasiexperimental studies to use statistical control to remove confounding effects from a regression coefficient. Controlling for relevant confounders can debias the estimated causal effect of a predictor on an outcome; that is, it can bring the estimated regression coefficient closer to the value of the true causal effect. But statistical control works only under ideal circumstances. When the selected control variables are inappropriate, controlling can result in estimates that are more biased than uncontrolled estimates. Despite the ubiquity of statistical control in published regression analyses and the consequences of controlling for inappropriate third variables, the selection of control variables is rarely explicitly justified in print. We argue that to carefully select appropriate control variables, researchers must propose and defend a causal structure that includes the outcome, predictors, and plausible confounders. We underscore the importance of causality when selecting control variables by demonstrating how regression coefficients are affected by controlling for appropriate and inappropriate variables. Finally, we provide practical recommendations for applied researchers who wish to use statistical control.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"5 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45832531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-09 | DOI: 10.1177/2515245920922982
J. Grice, Eliwid Medellin, Ian Jones, Samantha Horvath, Hailey McDaniel, Chance O’lansen, Meggie Baker
Traditional indices of effect size are designed to answer questions about average group differences, associations between variables, and relative risk. For many researchers, an additional, important question is, “How many people in my study behaved or responded in a manner consistent with theoretical expectation?” We show how the answer to this question can be computed and reported as a straightforward percentage for a wide variety of study designs. This percentage essentially treats persons as an effect size, and it can easily be understood by scientists, professionals, and laypersons alike. For instance, imagine that in addition to d or η2, a researcher reports that 80% of participants matched theoretical expectation. No statistical training is required to understand the basic meaning of this percentage. By analyzing recently published studies, we show how computing this percentage can reveal novel patterns within data that provide insights for extending and developing the theory under investigation.
{"title":"Persons as Effect Sizes","authors":"J. Grice, Eliwid Medellin, Ian Jones, Samantha Horvath, Hailey McDaniel, Chance O’lansen, Meggie Baker","doi":"10.1177/2515245920922982","DOIUrl":"https://doi.org/10.1177/2515245920922982","url":null,"abstract":"Traditional indices of effect size are designed to answer questions about average group differences, associations between variables, and relative risk. For many researchers, an additional, important question is, “How many people in my study behaved or responded in a manner consistent with theoretical expectation?” We show how the answer to this question can be computed and reported as a straightforward percentage for a wide variety of study designs. This percentage essentially treats persons as an effect size, and it can easily be understood by scientists, professionals, and laypersons alike. For instance, imagine that in addition to d or η2, a researcher reports that 80% of participants matched theoretical expectation. No statistical training is required to understand the basic meaning of this percentage. By analyzing recently published studies, we show how computing this percentage can reveal novel patterns within data that provide insights for extending and developing the theory under investigation.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"443 - 455"},"PeriodicalIF":13.6,"publicationDate":"2020-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920922982","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44876063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-07 | DOI: 10.1177/2515245920957607
J. Sauer, A. Drummond
Funder and Ozer (2019) argued that small effects can have important implications in cumulative long-run scenarios. We certainly agree. However, some important caveats merit explicit consideration. We elaborate on the previously acknowledged importance of preregistration (and open-data practices) and identify two additional considerations for interpreting small effects in long-run scenarios: restricted extrapolation and construct validity.
{"title":"Boundary Conditions for the Practical Importance of Small Effects in Long Runs: A Comment on Funder and Ozer (2019)","authors":"J. Sauer, A. Drummond","doi":"10.1177/2515245920957607","DOIUrl":"https://doi.org/10.1177/2515245920957607","url":null,"abstract":"Funder and Ozer (2019) argued that small effects can haveimportant implications in cumulative long-run scenarios.We certainly agree. However, some important caveatsmerit explicit consideration. We elaborate on the previously acknowledged importance of preregistration (andopen-data practices) and identify two additional considerations for interpreting small effects in long-run scenarios: restricted extrapolation and construct validity","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"502 - 504"},"PeriodicalIF":13.6,"publicationDate":"2020-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920957607","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49101096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920953350
L. Lazarević, D. Purić, I. Žeželj, Radomir Belopavlović, Bojana Bodroža, Marija Čolić, C. Ebersole, Máire B Ford, Ana Orlić, Ivana Pedović, B. Petrović, A. Shabazian, Darko Stojilović
Across three studies, LoBue and DeLoache (2008) provided evidence suggesting that both young children and adults exhibit enhanced visual detection of evolutionarily relevant threat stimuli (as compared with nonthreatening stimuli). A replication of their Experiment 3, conducted by Cramblet Alvarez and Pipitone (2015) as part of the Reproducibility Project: Psychology (RP:P), demonstrated trends similar to those of the original study, but the effect sizes were smaller and not statistically significant. There were, however, some methodological differences (e.g., screen size) and sampling differences (the age of recruited children) between the original study and the RP:P replication study. Additionally, LoBue and DeLoache expressed concern over the choice of stimuli used in the RP:P replication. We sought to explore the possible moderating effects of these factors by conducting two new replications—one using the protocol from the RP:P and the other using a revised protocol. We collected data at four sites, three in Serbia and one in the United States (total N = 553). Overall, participants were not significantly faster at detecting threatening stimuli. Thus, results were not supportive of the hypothesis that visual detection of evolutionarily relevant threat stimuli is enhanced in young children. The effect from the RP:P protocol (d = −0.10, 95% confidence interval = [−1.02, 0.82]) was similar to the effect from the revised protocol (d = −0.09, 95% confidence interval = [−0.33, 0.15]), and the results from both the RP:P and the revised protocols were more similar to those found by Cramblet Alvarez and Pipitone than to those found by LoBue and DeLoache.
{"title":"Many Labs 5: Registered Replication of LoBue and DeLoache (2008)","authors":"L. Lazarević, D. Purić, I. Žeželj, Radomir Belopavlović, Bojana Bodroža, Marija Čolić, C. Ebersole, Máire B Ford, Ana Orlić, Ivana Pedović, B. Petrović, A. Shabazian, Darko Stojilović","doi":"10.1177/2515245920953350","DOIUrl":"https://doi.org/10.1177/2515245920953350","url":null,"abstract":"Across three studies, LoBue and DeLoache (2008) provided evidence suggesting that both young children and adults exhibit enhanced visual detection of evolutionarily relevant threat stimuli (as compared with nonthreatening stimuli). A replication of their Experiment 3, conducted by Cramblet Alvarez and Pipitone (2015) as part of the Reproducibility Project: Psychology (RP:P), demonstrated trends similar to those of the original study, but the effect sizes were smaller and not statistically significant. There were, however, some methodological differences (e.g., screen size) and sampling differences (the age of recruited children) between the original study and the RP:P replication study. Additionally, LoBue and DeLoache expressed concern over the choice of stimuli used in the RP:P replication. We sought to explore the possible moderating effects of these factors by conducting two new replications—one using the protocol from the RP:P and the other using a revised protocol. We collected data at four sites, three in Serbia and one in the United States (total N = 553). Overall, participants were not significantly faster at detecting threatening stimuli. Thus, results were not supportive of the hypothesis that visual detection of evolutionarily relevant threat stimuli is enhanced in young children. The effect from the RP:P protocol (d = −0.10, 95% confidence interval = [−1.02, 0.82]) was similar to the effect from the revised protocol (d = −0.09, 95% confidence interval = [−0.33, 0.15]), and the results from both the RP:P and the revised protocols were more similar to those found by Cramblet Alvarez and Pipitone than to those found by LoBue and DeLoache.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"377 - 386"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920953350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42166078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920945963
Christopher R. Chartier, J. Arnal, Holly Arrow, Nicholas G. Bloxsom, D. Bonfiglio, C. Brumbaugh, Katherine S. Corker, C. Ebersole, Alexander Garinther, S. Giessner, Sean Hughes, M. Inzlicht, Hause Lin, Brett Mercier, Mitchell M. Metzger, D. Rangel, Blair Saunders, Kathleen Schmidt, Daniel Storage, Carly Tocco
In Experiment 5 of Albarracín et al. (2008), participants primed with words associated with action performed better on a subsequent cognitive task than did participants primed with words associated with inaction. A direct replication attempt by Frank, Kim, and Lee (2016) as part of the Reproducibility Project: Psychology (RP:P) failed to find evidence for this effect. In this article, we discuss several potential explanations for these discrepant findings: the source of participants (Amazon’s Mechanical Turk vs. traditional undergraduate-student pool), the setting of participation (online vs. in lab), and the possible moderating role of affect. We tested Albarracín et al.’s original hypothesis in two new samples: For the first sample, we followed the protocol developed by Frank et al. and recruited participants via Amazon’s Mechanical Turk (n = 580). For the second sample, we used a revised protocol incorporating feedback from the original authors and recruited participants from eight universities (n = 884). We did not detect moderation by protocol; patterns in the revised protocol resembled those in our implementation of the RP:P protocol, but the estimate of the focal effect size was smaller than that found originally by Albarracín et al. and larger than that found in Frank et al.’s replication attempt. We discuss these findings and possible explanations.
{"title":"Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 5","authors":"Christopher R. Chartier, J. Arnal, Holly Arrow, Nicholas G. Bloxsom, D. Bonfiglio, C. Brumbaugh, Katherine S. Corker, C. Ebersole, Alexander Garinther, S. Giessner, Sean Hughes, M. Inzlicht, Hause Lin, Brett Mercier, Mitchell M. Metzger, D. Rangel, Blair Saunders, Kathleen Schmidt, Daniel Storage, Carly Tocco","doi":"10.1177/2515245920945963","DOIUrl":"https://doi.org/10.1177/2515245920945963","url":null,"abstract":"In Experiment 5 of Albarracín et al. (2008), participants primed with words associated with action performed better on a subsequent cognitive task than did participants primed with words associated with inaction. A direct replication attempt by Frank, Kim, and Lee (2016) as part of the Reproducibility Project: Psychology (RP:P) failed to find evidence for this effect. In this article, we discuss several potential explanations for these discrepant findings: the source of participants (Amazon’s Mechanical Turk vs. traditional undergraduate-student pool), the setting of participation (online vs. in lab), and the possible moderating role of affect. We tested Albarracín et al.’s original hypothesis in two new samples: For the first sample, we followed the protocol developed by Frank et al. and recruited participants via Amazon’s Mechanical Turk (n = 580). For the second sample, we used a revised protocol incorporating feedback from the original authors and recruited participants from eight universities (n = 884). We did not detect moderation by protocol; patterns in the revised protocol resembled those in our implementation of the RP:P protocol, but the estimate of the focal effect size was smaller than that found originally by Albarracín et al. and larger than that found in Frank et al.’s replication attempt. We discuss these findings and possible explanations.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"332 - 339"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920945963","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43659543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920925750
Katherine S. Corker, J. Arnal, D. Bonfiglio, P. Curran, Christopher R. Chartier, W. Chopik, R. Guadagno, Amanda M. Kimbrough, Kathleen Schmidt, B. Wiggins
Albarracín et al. (2008, Experiment 7) tested whether priming action or inaction goals (vs. no goal) and then satisfying those goals (vs. not satisfying them) would be associated with subsequent cognitive responding. They hypothesized and found that priming action or inaction goals that were not satisfied resulted in greater or lesser responding, respectively, compared with not priming goals (N = 98). Sonnleitner and Voracek (2015) attempted to directly replicate Albarracín et al.’s (2008) study with German participants (N = 105). They did not find evidence for the 3 × 2 interaction or the expected main effect of task type. The current study attempted to directly replicate Albarracín et al. (2008), Experiment 7, with a larger sample of participants (N = 1,690) from seven colleges and universities in the United States. We also extended the study design by using a scrambled-sentence task to prime goals instead of the original task of completing word fragments, allowing us to test whether study protocol moderated any effects of interest. We did not detect moderation by protocol in the full 3 × 2 × 2 design (pseudo-r2 = 0.05%). Results for both protocols were largely consistent with Sonnleitner and Voracek’s findings (pseudo-r2s = 0.14% and 0.50%). We consider these results in light of recent findings concerning priming methods and discuss the robustness of action-/inaction-goal priming to the implementation of different protocols in this particular context.
{"title":"Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 7","authors":"Katherine S. Corker, J. Arnal, D. Bonfiglio, P. Curran, Christopher R. Chartier, W. Chopik, R. Guadagno, Amanda M. Kimbrough, Kathleen Schmidt, B. Wiggins","doi":"10.1177/2515245920925750","DOIUrl":"https://doi.org/10.1177/2515245920925750","url":null,"abstract":"Albarracín et al. (2008, Experiment 7) tested whether priming action or inaction goals (vs. no goal) and then satisfying those goals (vs. not satisfying them) would be associated with subsequent cognitive responding. They hypothesized and found that priming action or inaction goals that were not satisfied resulted in greater or lesser responding, respectively, compared with not priming goals (N = 98). Sonnleitner and Voracek (2015) attempted to directly replicate Albarracín et al.’s (2008) study with German participants (N = 105). They did not find evidence for the 3 × 2 interaction or the expected main effect of task type. The current study attempted to directly replicate Albarracín et al. (2008), Experiment 7, with a larger sample of participants (N = 1,690) from seven colleges and universities in the United States. We also extended the study design by using a scrambled-sentence task to prime goals instead of the original task of completing word fragments, allowing us to test whether study protocol moderated any effects of interest. We did not detect moderation by protocol in the full 3 × 2 × 2 design (pseudo-r2 = 0.05%). Results for both protocols were largely consistent with Sonnleitner and Voracek’s findings (pseudo-r2s = 0.14% and 0.50%). We consider these results in light of recent findings concerning priming methods and discuss the robustness of action-/inaction-goal priming to the implementation of different protocols in this particular context.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"340 - 352"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920925750","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44432448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-09-01 | DOI: 10.1177/2515245919870737
H. Rabagliati, M. Corley, Benjamin R. Dering, P. Hancock, Josiah P J King, C. Levitan, J. Loy, Ailsa E. Millen
Crosby, Monin, and Richardson (2008) found that hearing an offensive remark caused subjects (N = 25) to look longer at a potentially offended person, but only if that person could hear the remark. On the basis of this result, they argued that people use social referencing to assess the offensiveness of such remarks. However, in a direct replication in the Reproducibility Project: Psychology, the result for Crosby et al.’s key effect was not significant. In the current project, we tested whether the size of the social-referencing effect might be increased by a peer-reviewed and preregistered protocol manipulation in which some participants were given context to understand why the remark was potentially offensive. Three labs in Europe and the United States (N = 283) took part. The protocol manipulation did not affect the size of the social-referencing effect. However, we did replicate the original effect reported by Crosby et al., albeit with a much smaller effect size. We discuss these results in the context of ongoing debates about how replication attempts should treat statistical power and contextual sensitivity.
{"title":"Many Labs 5: Registered Replication of Crosby, Monin, and Richardson (2008)","authors":"H. Rabagliati, M. Corley, Benjamin R. Dering, P. Hancock, Josiah P J King, C. Levitan, J. Loy, Ailsa E. Millen","doi":"10.1177/2515245919870737","DOIUrl":"https://doi.org/10.1177/2515245919870737","url":null,"abstract":"Crosby, Monin, and Richardson (2008) found that hearing an offensive remark caused subjects (N = 25) to look longer at a potentially offended person, but only if that person could hear the remark. On the basis of this result, they argued that people use social referencing to assess the offensiveness. However, in a direct replication in the Reproducibility Project: Psychology, the result for Crosby et al.’s key effect was not significant. In the current project, we tested whether the size of the social-referencing effect might be increased by a peer-reviewed and preregistered protocol manipulation in which some participants were given context to understand why the remark was potentially offensive. Three labs in Europe and the United States (N = 283) took part. The protocol manipulation did not affect the size of the social-referencing effect. However, we did replicate the original effect reported by Crosby et al., albeit with a much smaller effect size. We discuss these results in the context of ongoing debates about how replication attempts should treat statistical power and contextual sensitivity.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"353 - 365"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245919870737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43192977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}