
Latest Publications: Advances in Methods and Practices in Psychological Science

Your Coefficient Alpha Is Probably Wrong, but Which Coefficient Omega Is Right? A Tutorial on Using R to Obtain Better Reliability Estimates
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-11-06 | DOI: 10.1177/2515245920951747
D. Flora
Measurement quality has recently been highlighted as an important concern for advancing a cumulative psychological science. An implication is that researchers should move beyond mechanistically reporting coefficient alpha toward more carefully assessing the internal structure and reliability of multi-item scales. Yet a researcher may be discouraged upon discovering that a prominent alternative to alpha, namely, coefficient omega, can be calculated in a variety of ways. In this Tutorial, I alleviate this potential confusion by describing alternative forms of omega and providing guidelines for choosing an appropriate omega estimate pertaining to the measurement of a target construct represented with a confirmatory factor analysis model. Several applied examples demonstrate how to compute different forms of omega in R.
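The tutorial itself works in R; as a language-agnostic illustration of the quantity at stake, here is a minimal Python sketch of coefficient omega for a single-factor (congeneric) model, ω = (Σλ)² / [(Σλ)² + Σθ], using hypothetical standardized loadings in place of values one would estimate via CFA.

```python
# Minimal sketch of McDonald's coefficient omega for a unidimensional scale,
# computed from standardized factor loadings. The loadings are hypothetical
# stand-ins for CFA estimates (e.g., from lavaan in R).

def coefficient_omega(loadings):
    """Omega = (sum of loadings)^2 / [(sum of loadings)^2 + sum of residual variances]."""
    sum_lambda = sum(loadings)
    # For standardized items, each residual variance is 1 - lambda^2.
    sum_theta = sum(1 - lam**2 for lam in loadings)
    return sum_lambda**2 / (sum_lambda**2 + sum_theta)

loadings = [0.7, 0.8, 0.6, 0.75]  # hypothetical standardized loadings
print(round(coefficient_omega(loadings), 3))
```

Unlike alpha, this estimate does not assume equal loadings (tau-equivalence); the different "forms of omega" the tutorial distinguishes arise from how the factor model (e.g., bifactor vs. single-factor) partitions the variance terms.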
Citations: 163
The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-11-05 | DOI: 10.1177/25152459211045930
Márton Kovács, Rink Hoekstra, B. Aczel
Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays due to human errors while collecting, analyzing, or reporting data. The present study is an exploration of mistakes made during the data-management process in psychological research. We surveyed 488 researchers regarding the type, frequency, seriousness, and outcome of mistakes that have occurred in their research team during the last 5 years. The majority of respondents suggested that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half of respondents, and major or extreme consequences for about one fifth of respondents. The most frequently reported types of mistakes were ambiguous naming/defining of data, version control error, and wrong data processing/analysis. Most mistakes were reportedly due to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative for psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and the development of solutions to reduce errors and mitigate their impact.
Citations: 3
Commentary on Hussey and Hughes (2020): Hidden Invalidity Among 15 Commonly Used Measures in Social and Personality Psychology
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-10-15 | DOI: 10.1177/2515245920957618
Eunike Wetzel, B. Roberts
Hussey and Hughes (2020) analyzed four aspects relevant to the structural validity of a psychological scale (internal consistency, test-retest reliability, factor structure, and measurement invariance) in 15 self-report questionnaires, some of which, such as the Big Five Inventory (John & Srivastava, 1999) and the Rosenberg Self-Esteem Scale (Rosenberg, 1965), are very popular. In this Commentary, we argue that (a) their claim that measurement issues like these are ignored is incorrect, (b) the models they used to test structural validity do not match the construct space for many of the measures, and (c) their analyses and conclusions regarding measurement invariance were needlessly limited to a dichotomous decision rule. First, we believe it is important to note that we are in agreement with the sentiment behind Hussey and Hughes’s study and the previous work that appeared to inspire it (Flake, Pek, & Hehman, 2017). Measurement issues are seldom the focus of the articles published in the top journals in personality and social psychology, and the quality of the measures used by researchers is not a top priority in evaluating the value of the research. Furthermore, the use of ad hoc measures is common in some fields. Nonetheless, we disagree with the authors’ analyses, interpretations, and conclusions concerning the validity of these 15 specific measures for the three reasons we discuss here.
Citations: 8
Statistical Control Requires Causal Justification
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-10-13 | DOI: 10.1177/25152459221095823
Anna C. Wysocki, K. Lawson, M. Rhemtulla
It is common practice in correlational or quasiexperimental studies to use statistical control to remove confounding effects from a regression coefficient. Controlling for relevant confounders can debias the estimated causal effect of a predictor on an outcome; that is, it can bring the estimated regression coefficient closer to the value of the true causal effect. But statistical control works only under ideal circumstances. When the selected control variables are inappropriate, controlling can result in estimates that are more biased than uncontrolled estimates. Despite the ubiquity of statistical control in published regression analyses and the consequences of controlling for inappropriate third variables, the selection of control variables is rarely explicitly justified in print. We argue that to carefully select appropriate control variables, researchers must propose and defend a causal structure that includes the outcome, predictors, and plausible confounders. We underscore the importance of causality when selecting control variables by demonstrating how regression coefficients are affected by controlling for appropriate and inappropriate variables. Finally, we provide practical recommendations for applied researchers who wish to use statistical control.
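The abstract's central claim, that controlling for the wrong third variable can make estimates *more* biased, can be illustrated with a small simulation under a hypothetical data-generating process (not from the article): a true confounder C and a collider D caused by both predictor and outcome.

```python
# Simulated illustration: controlling for a confounder debiases the X->Y
# coefficient, while controlling for a collider biases it badly.
# The true causal effect of X on Y below is 0.5 by construction.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c = rng.normal(size=n)                          # confounder: affects both X and Y
x = 0.8 * c + rng.normal(size=n)
y = 0.5 * x + 0.8 * c + rng.normal(size=n)      # true effect of X on Y: 0.5
d = x + y + rng.normal(size=n)                  # collider: caused by X and Y

def ols_coef(outcome, *predictors):
    """OLS coefficient on the first predictor (intercept included)."""
    design = np.column_stack([np.ones(len(outcome)), *predictors])
    return np.linalg.lstsq(design, outcome, rcond=None)[0][1]

print(ols_coef(y, x))        # no control: ~0.89, biased upward by C
print(ols_coef(y, x, c))     # control for confounder: ~0.50, debiased
print(ols_coef(y, x, d))     # control for collider: ~-0.21, sign even flips
```

Only a causal model of how X, Y, C, and D are related tells us which regression is the right one; the data alone cannot.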
Citations: 46
Persons as Effect Sizes
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-10-09 | DOI: 10.1177/2515245920922982
J. Grice, Eliwid Medellin, Ian Jones, Samantha Horvath, Hailey McDaniel, Chance O’lansen, Meggie Baker
Traditional indices of effect size are designed to answer questions about average group differences, associations between variables, and relative risk. For many researchers, an additional, important question is, “How many people in my study behaved or responded in a manner consistent with theoretical expectation?” We show how the answer to this question can be computed and reported as a straightforward percentage for a wide variety of study designs. This percentage essentially treats persons as an effect size, and it can easily be understood by scientists, professionals, and laypersons alike. For instance, imagine that in addition to d or η2, a researcher reports that 80% of participants matched theoretical expectation. No statistical training is required to understand the basic meaning of this percentage. By analyzing recently published studies, we show how computing this percentage can reveal novel patterns within data that provide insights for extending and developing the theory under investigation.
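One simple version of such a person-level percentage (a hypothetical illustration with simulated scores, not necessarily the authors' exact procedure) is to count how many treatment participants score on the theoretically expected side of a cutpoint, such as the control-group median:

```python
# Sketch of a "persons as effect size" style index: the percentage of
# treatment participants whose score exceeds the control-group median,
# i.e., who behaved consistently with the theoretical expectation.
# Scores below are made-up illustration data.
import statistics

control = [4, 5, 6, 5, 4, 6, 5, 7, 5, 6]
treatment = [6, 7, 8, 6, 9, 7, 5, 8, 7, 6]   # theory predicts higher scores here

cutpoint = statistics.median(control)
matched = sum(score > cutpoint for score in treatment)
percent_matched = 100 * matched / len(treatment)
print(f"{percent_matched:.0f}% of treatment participants exceeded the control median")
```

Such a percentage can be reported alongside d or η²; its appeal is that "9 of 10 participants matched expectation" needs no statistical training to interpret.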
Citations: 44
Boundary Conditions for the Practical Importance of Small Effects in Long Runs: A Comment on Funder and Ozer (2019)
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-10-07 | DOI: 10.1177/2515245920957607
J. Sauer, A. Drummond
Funder and Ozer (2019) argued that small effects can have important implications in cumulative long-run scenarios. We certainly agree. However, some important caveats merit explicit consideration. We elaborate on the previously acknowledged importance of preregistration (and open-data practices) and identify two additional considerations for interpreting small effects in long-run scenarios: restricted extrapolation and construct validity.
Citations: 7
Many Labs 5: Registered Replication of LoBue and DeLoache (2008)
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-09-01 | DOI: 10.1177/2515245920953350
L. Lazarević, D. Purić, I. Žeželj, Radomir Belopavlović, Bojana Bodroža, Marija Čolić, C. Ebersole, Máire B Ford, Ana Orlić, Ivana Pedović, B. Petrović, A. Shabazian, Darko Stojilović
Across three studies, LoBue and DeLoache (2008) provided evidence suggesting that both young children and adults exhibit enhanced visual detection of evolutionarily relevant threat stimuli (as compared with nonthreatening stimuli). A replication of their Experiment 3, conducted by Cramblet Alvarez and Pipitone (2015) as part of the Reproducibility Project: Psychology (RP:P), demonstrated trends similar to those of the original study, but the effect sizes were smaller and not statistically significant. There were, however, some methodological differences (e.g., screen size) and sampling differences (the age of recruited children) between the original study and the RP:P replication study. Additionally, LoBue and DeLoache expressed concern over the choice of stimuli used in the RP:P replication. We sought to explore the possible moderating effects of these factors by conducting two new replications—one using the protocol from the RP:P and the other using a revised protocol. We collected data at four sites, three in Serbia and one in the United States (total N = 553). Overall, participants were not significantly faster at detecting threatening stimuli. Thus, results were not supportive of the hypothesis that visual detection of evolutionarily relevant threat stimuli is enhanced in young children. The effect from the RP:P protocol (d = −0.10, 95% confidence interval = [−1.02, 0.82]) was similar to the effect from the revised protocol (d = −0.09, 95% confidence interval = [−0.33, 0.15]), and the results from both the RP:P and the revised protocols were more similar to those found by Cramblet Alvarez and Pipitone than to those found by LoBue and DeLoache.
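The effect sizes summarized above are standardized mean differences with confidence intervals. As a sketch of how such a summary is computed (hypothetical group statistics, and the common normal-approximation standard error for d, not the authors' exact pipeline):

```python
# Cohen's d for two independent groups (pooled SD) with an approximate
# 95% CI. The means/SDs/ns below are hypothetical reaction-time summaries,
# chosen only to illustrate the calculation.
import math

def cohens_d_ci(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a normal-approximation 95% confidence interval."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Common large-sample approximation to the sampling variance of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

d, (lo, hi) = cohens_d_ci(m1=610.0, s1=90.0, n1=280, m2=620.0, s2=95.0, n2=273)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these made-up inputs d comes out near -0.11, and a CI spanning zero, as in the replication results above, is the signature of a nonsignificant effect.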
Citations: 3
Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 5
IF 13.6 | Tier 1, Psychology | Q1 PSYCHOLOGY | Pub Date: 2020-09-01 | DOI: 10.1177/2515245920945963
Christopher R. Chartier, J. Arnal, Holly Arrow, Nicholas G. Bloxsom, D. Bonfiglio, C. Brumbaugh, Katherine S. Corker, C. Ebersole, Alexander Garinther, S. Giessner, Sean Hughes, M. Inzlicht, Hause Lin, Brett Mercier, Mitchell M. Metzger, D. Rangel, Blair Saunders, Kathleen Schmidt, Daniel Storage, Carly Tocco
In Experiment 5 of Albarracín et al. (2008), participants primed with words associated with action performed better on a subsequent cognitive task than did participants primed with words associated with inaction. A direct replication attempt by Frank, Kim, and Lee (2016) as part of the Reproducibility Project: Psychology (RP:P) failed to find evidence for this effect. In this article, we discuss several potential explanations for these discrepant findings: the source of participants (Amazon’s Mechanical Turk vs. traditional undergraduate-student pool), the setting of participation (online vs. in lab), and the possible moderating role of affect. We tested Albarracín et al.’s original hypothesis in two new samples: For the first sample, we followed the protocol developed by Frank et al. and recruited participants via Amazon’s Mechanical Turk (n = 580). For the second sample, we used a revised protocol incorporating feedback from the original authors and recruited participants from eight universities (n = 884). We did not detect moderation by protocol; patterns in the revised protocol resembled those in our implementation of the RP:P protocol, but the estimate of the focal effect size was smaller than that found originally by Albarracín et al. and larger than that found in Frank et al.’s replication attempt. We discuss these findings and possible explanations.
Citations: 4
Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 7
IF 13.6 · CAS Tier 1 (Psychology) · JCR Q1, PSYCHOLOGY · Pub Date: 2020-09-01 · DOI: 10.1177/2515245920925750
Katherine S. Corker, J. Arnal, D. Bonfiglio, P. Curran, Christopher R. Chartier, W. Chopik, R. Guadagno, Amanda M. Kimbrough, Kathleen Schmidt, B. Wiggins
Albarracín et al. (2008, Experiment 7) tested whether priming action or inaction goals (vs. no goal) and then satisfying those goals (vs. not satisfying them) would be associated with subsequent cognitive responding. They hypothesized and found that priming action or inaction goals that were not satisfied resulted in greater or lesser responding, respectively, compared with not priming goals (N = 98). Sonnleitner and Voracek (2015) attempted to directly replicate Albarracín et al.’s (2008) study with German participants (N = 105). They did not find evidence for the 3 × 2 interaction or the expected main effect of task type. The current study attempted to directly replicate Albarracín et al. (2008), Experiment 7, with a larger sample of participants (N = 1,690) from seven colleges and universities in the United States. We also extended the study design by using a scrambled-sentence task to prime goals instead of the original task of completing word fragments, allowing us to test whether study protocol moderated any effects of interest. We did not detect moderation by protocol in the full 3 × 2 × 2 design (pseudo-r2 = 0.05%). Results for both protocols were largely consistent with Sonnleitner and Voracek’s findings (pseudo-r2s = 0.14% and 0.50%). We consider these results in light of recent findings concerning priming methods and discuss the robustness of action-/inaction-goal priming to the implementation of different protocols in this particular context.
{"title":"Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 7","authors":"Katherine S. Corker, J. Arnal, D. Bonfiglio, P. Curran, Christopher R. Chartier, W. Chopik, R. Guadagno, Amanda M. Kimbrough, Kathleen Schmidt, B. Wiggins","doi":"10.1177/2515245920925750","DOIUrl":"https://doi.org/10.1177/2515245920925750","url":null,"abstract":"Albarracín et al. (2008, Experiment 7) tested whether priming action or inaction goals (vs. no goal) and then satisfying those goals (vs. not satisfying them) would be associated with subsequent cognitive responding. They hypothesized and found that priming action or inaction goals that were not satisfied resulted in greater or lesser responding, respectively, compared with not priming goals (N = 98). Sonnleitner and Voracek (2015) attempted to directly replicate Albarracín et al.’s (2008) study with German participants (N = 105). They did not find evidence for the 3 × 2 interaction or the expected main effect of task type. The current study attempted to directly replicate Albarracín et al. (2008), Experiment 7, with a larger sample of participants (N = 1,690) from seven colleges and universities in the United States. We also extended the study design by using a scrambled-sentence task to prime goals instead of the original task of completing word fragments, allowing us to test whether study protocol moderated any effects of interest. We did not detect moderation by protocol in the full 3 × 2 × 2 design (pseudo-r2 = 0.05%). Results for both protocols were largely consistent with Sonnleitner and Voracek’s findings (pseudo-r2s = 0.14% and 0.50%). 
We consider these results in light of recent findings concerning priming methods and discuss the robustness of action-/inaction-goal priming to the implementation of different protocols in this particular context.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"340 - 352"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920925750","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44432448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Many Labs 5: Registered Replication of Crosby, Monin, and Richardson (2008)
IF 13.6 · CAS Tier 1 (Psychology) · JCR Q1, PSYCHOLOGY · Pub Date: 2020-09-01 · DOI: 10.1177/2515245919870737
H. Rabagliati, M. Corley, Benjamin R. Dering, P. Hancock, Josiah P J King, C. Levitan, J. Loy, Ailsa E. Millen
Crosby, Monin, and Richardson (2008) found that hearing an offensive remark caused subjects (N = 25) to look longer at a potentially offended person, but only if that person could hear the remark. On the basis of this result, they argued that people use social referencing to assess the offensiveness. However, in a direct replication in the Reproducibility Project: Psychology, the result for Crosby et al.’s key effect was not significant. In the current project, we tested whether the size of the social-referencing effect might be increased by a peer-reviewed and preregistered protocol manipulation in which some participants were given context to understand why the remark was potentially offensive. Three labs in Europe and the United States (N = 283) took part. The protocol manipulation did not affect the size of the social-referencing effect. However, we did replicate the original effect reported by Crosby et al., albeit with a much smaller effect size. We discuss these results in the context of ongoing debates about how replication attempts should treat statistical power and contextual sensitivity.
{"title":"Many Labs 5: Registered Replication of Crosby, Monin, and Richardson (2008)","authors":"H. Rabagliati, M. Corley, Benjamin R. Dering, P. Hancock, Josiah P J King, C. Levitan, J. Loy, Ailsa E. Millen","doi":"10.1177/2515245919870737","DOIUrl":"https://doi.org/10.1177/2515245919870737","url":null,"abstract":"Crosby, Monin, and Richardson (2008) found that hearing an offensive remark caused subjects (N = 25) to look longer at a potentially offended person, but only if that person could hear the remark. On the basis of this result, they argued that people use social referencing to assess the offensiveness. However, in a direct replication in the Reproducibility Project: Psychology, the result for Crosby et al.’s key effect was not significant. In the current project, we tested whether the size of the social-referencing effect might be increased by a peer-reviewed and preregistered protocol manipulation in which some participants were given context to understand why the remark was potentially offensive. Three labs in Europe and the United States (N = 283) took part. The protocol manipulation did not affect the size of the social-referencing effect. However, we did replicate the original effect reported by Crosby et al., albeit with a much smaller effect size. 
We discuss these results in the context of ongoing debates about how replication attempts should treat statistical power and contextual sensitivity.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"353 - 365"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245919870737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43192977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2