Many Labs 5: Registered Replication of Shnabel and Nadler (2008), Study 4
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920917334 | Pages: 405–417
E. Baranski, Ernest Baskin, Sean Coary, C. Ebersole, Lacy E. Krueger, L. Lazarević, Jeremy K. Miller, Ana Orlić, Matthew R. Penner, D. Purić, S. Rife, L. Vaughn, A. Wichman, I. Žeželj
Shnabel and Nadler (2008) assessed a needs-based model of reconciliation suggesting that in conflicts, victims and perpetrators have different psychological needs that, when satisfied, increase the chances of reconciliation. For instance, Shnabel and Nadler found that after a conflict, perpetrators indicated that they had a need for social acceptance and were more likely to reconcile after their sense of social acceptance was restored, whereas victims indicated that they had a need for power and were more likely to reconcile after their sense of power was restored. Gilbert (2016), as part of the Reproducibility Project: Psychology (RP:P), attempted to replicate these findings using different study materials but did not find support for the original effect. In an attempt to reconcile these discrepant findings, we conducted two new sets of replications: one using the RP:P protocol and another using modified materials meant to be more relatable to undergraduate participants. Teams from eight universities contributed to data collection (N = 2,738). We did find moderation by protocol; the focal interaction from the revised protocol, but not from the RP:P protocol, replicated the interaction in the original study. We discuss differences in, and possible explanations for, the patterns of results across protocols.
{"title":"Many Labs 5: Registered Replication of Shnabel and Nadler (2008), Study 4","authors":"E. Baranski, Ernest Baskin, Sean Coary, C. Ebersole, Lacy E. Krueger, L. Lazarević, Jeremy K. Miller, Ana Orlić, Matthew R. Penner, D. Purić, S. Rife, L. Vaughn, A. Wichman, I. Žeželj","doi":"10.1177/2515245920917334","DOIUrl":"https://doi.org/10.1177/2515245920917334","url":null,"abstract":"Shnabel and Nadler (2008) assessed a needs-based model of reconciliation suggesting that in conflicts, victims and perpetrators have different psychological needs that when satisfied increase the chances of reconciliation. For instance, Shnabel and Nadler found that after a conflict, perpetrators indicated that they had a need for social acceptance and were more likely to reconcile after their sense of social acceptance was restored, whereas victims indicated that they had a need for power and were more likely to reconcile after their sense of power was restored. Gilbert (2016), as a part of the Reproducibility Project: Psychology (RP:P), attempted to replicate these findings using different study materials but did not find support for the original effect. In an attempt to reconcile these discrepant findings, we conducted two new sets of replications—one using the RP:P protocol and another using modified materials meant to be more relatable to undergraduate participants. Teams from eight universities contributed to data collection (N = 2,738). We did find moderation by protocol; the focal interaction from the revised protocol, but not from the RP:P protocol, replicated the interaction in the original study. We discuss differences in, and possible explanations for, the patterns of results across protocols.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"405 - 417"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920917334","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46409138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Labs 5: Registered Replication of Förster, Liberman, and Kuschel’s (2008) Study 1
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920916513 | Pages: 366–376
H. Ijzerman, Ivan Ropovik, C. Ebersole, N. Tidwell, Łukasz Markiewicz, Tiago Jessé Souza de Lima, D. Wolf, S. Novak, W. Collins, M. Menon, Luana Elayne Cunha de Souza, P. Sawicki, L. Boucher, Michał J. Białek, Katarzyna Idzikowska, Timothy S. Razza, S. Kraus, Sophia C. Weissgerber, G. Baník, S. Kołodziej, P. Babinčák, A. Schütz, R. W. Sternglanz, Katarzyna Gawryluk, G. Sullivan, C. Day
In a test of their global-/local-processing-style model, Förster, Liberman, and Kuschel (2008) found that people assimilate a primed concept (e.g., “aggressive”) into their social judgments after a global prime (e.g., they rate a person as being more aggressive than do people in a no-prime condition) but contrast their judgment away from the primed concept after a local prime (e.g., they rate the person as being less aggressive than do people in a no-prime condition). This effect was not replicated by Reinhard (2015) in the Reproducibility Project: Psychology. However, the authors of the original study noted that the replication could not provide a test of the moderation effect because priming did not occur. They suggested that the primes might have been insufficiently applicable and the scenarios insufficiently ambiguous to produce priming. In the current replication project, we used both Reinhard’s protocol and a revised protocol that was designed to increase the likelihood of priming, to test the original authors’ suggested explanation for why Reinhard did not observe the moderation effect. Teams from nine universities contributed to this project. We first conducted a pilot study (N = 530) and successfully selected ambiguous scenarios for each site. We then pilot-tested the aggression prime at five different sites (N = 363) and found that it did not successfully produce priming. In agreement with the first author of the original report, we replaced the prime with a task that successfully primed aggression (hostility) in a pilot study by McCarthy et al. (2018). In the final replication study (N = 1,460), we did not find moderation by protocol type, and judgment patterns in both protocols were inconsistent with the effects observed in the original study. We discuss these findings and possible explanations.
{"title":"Many Labs 5: Registered Replication of Förster, Liberman, and Kuschel’s (2008) Study 1","authors":"H. Ijzerman, Ivan Ropovik, C. Ebersole, N. Tidwell, Łukasz Markiewicz, Tiago Jessé Souza de Lima, D. Wolf, S. Novak, W. Collins, M. Menon, Luana Elayne Cunha de Souza, P. Sawicki, L. Boucher, Michał J. Białek, Katarzyna Idzikowska, Timothy S. Razza, S. Kraus, Sophia C. Weissgerber, G. Baník, S. Kołodziej, P. Babinčák, A. Schütz, R. W. Sternglanz, Katarzyna Gawryluk, G. Sullivan, C. Day","doi":"10.1177/2515245920916513","DOIUrl":"https://doi.org/10.1177/2515245920916513","url":null,"abstract":"In a test of their global-/local-processing-style model, Förster, Liberman, and Kuschel (2008) found that people assimilate a primed concept (e.g., “aggressive”) into their social judgments after a global prime (e.g., they rate a person as being more aggressive than do people in a no-prime condition) but contrast their judgment away from the primed concept after a local prime (e.g., they rate the person as being less aggressive than do people in a no prime-condition). This effect was not replicated by Reinhard (2015) in the Reproducibility Project: Psychology. However, the authors of the original study noted that the replication could not provide a test of the moderation effect because priming did not occur. They suggested that the primes might have been insufficiently applicable and the scenarios insufficiently ambiguous to produce priming. In the current replication project, we used both Reinhard’s protocol and a revised protocol that was designed to increase the likelihood of priming, to test the original authors’ suggested explanation for why Reinhard did not observe the moderation effect. Teams from nine universities contributed to this project. We first conducted a pilot study (N = 530) and successfully selected ambiguous scenarios for each site. We then pilot-tested the aggression prime at five different sites (N = 363) and found that it did not successfully produce priming. In agreement with the first author of the original report, we replaced the prime with a task that successfully primed aggression (hostility) in a pilot study by McCarthy et al. (2018). In the final replication study (N = 1,460), we did not find moderation by protocol type, and judgment patterns in both protocols were inconsistent with the effects observed in the original study. We discuss these findings and possible explanations.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"366 - 376"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920916513","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45019732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Labs 5: Registered Replication of Payne, Burkley, and Stokes (2008), Study 4
Pub Date: 2020-09-01 | DOI: 10.1177/2515245919885609 | Pages: 387–393
C. Ebersole, L. Andrighetto, E. Casini, C. Chiorri, Anna Dalla Rosa, Filippo Domaneschi, Ian R. Ferguson, Emily Fryberger, Mauro Giacomantonio, Jon E. Grahe, Jennifer A. Joy-Gaba, Eleanor V. Langford, Austin Lee Nichols, A. Panno, Kimberly P. Parks, E. Preti, J. Richetin, M. Vianello
To rule out an alternative to their structural-fit hypothesis, Payne, Burkley, and Stokes (2008) demonstrated that correlations between implicit and explicit race attitudes were weaker when participants were put under high pressure to respond without bias than when they were placed under low pressure. This effect was replicated in Italy by Vianello (2015), although the replication effect was smaller than the original effect. In the current investigation, we examined the possibility that the source of a study’s sample moderates this effect. Teams from eight universities, four in the United States and four in Italy, replicated the original study (replication N = 1,103). Although we did detect moderation by the sample’s country, it was due to a reversal of the original effect in the United States and a lack of the original effect in Italy. We discuss this curious finding and possible explanations.
{"title":"Many Labs 5: Registered Replication of Payne, Burkley, and Stokes (2008), Study 4","authors":"C. Ebersole, L. Andrighetto, E. Casini, C. Chiorri, Anna Dalla Rosa, Filippo Domaneschi, Ian R. Ferguson, Emily Fryberger, Mauro Giacomantonio, Jon E. Grahe, Jennifer A. Joy-Gaba, Eleanor V. Langford, Austin Lee Nichols, A. Panno, Kimberly P. Parks, E. Preti, J. Richetin, M. Vianello","doi":"10.1177/2515245919885609","DOIUrl":"https://doi.org/10.1177/2515245919885609","url":null,"abstract":"To rule out an alternative to their structural-fit hypothesis, Payne, Burkley, and Stokes (2008) demonstrated that correlations between implicit and explicit race attitudes were weaker when participants were put under high pressure to respond without bias than when they were placed under low pressure. This effect was replicated in Italy by Vianello (2015), although the replication effect was smaller than the original effect. In the current investigation, we examined the possibility that the source of a study’s sample moderates this effect. Teams from eight universities, four in the United States and four in Italy, replicated the original study (replication N = 1,103). Although we did detect moderation by the sample’s country, it was due to a reversal of the original effect in the United States and a lack of the original effect in Italy. We discuss this curious finding and possible explanations.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"387 - 393"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245919885609","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46067600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Labs 5: Replication of van Dijk, van Kleef, Steinel, and van Beest (2008)
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920927643 | Pages: 418–428
Lauren Skorb, B. Aczel, Bence Bakos, Lily Feinberg, Ewa Hałasa, Mathias Kauff, Márton Kovács, Karolina Krasuska, Katarzyna Kuchno, Dylan Manfredi, Andres Montealegre, Emilian Pękala, Damian Pieńkosz, Jonathan D Ravid, K. Rentzsch, B. Szaszi, S. Schulz-Hardt, Barbara Sioma, Péter Szécsi, Attila Szuts, Orsolya Szöke, O. Christ, A. Fedor, William Jiménez-Leal, Rafał Muda, G. Nave, Janos Salamon, T. Schultze, Joshua K. Hartshorne
As part of the Many Labs 5 project, we ran a replication of van Dijk, van Kleef, Steinel, and van Beest’s (2008) study examining the effect of emotions in negotiations. They reported that when the consequences of rejection were low, subjects offered fewer chips to angry bargaining partners than to happy partners. We ran this replication under three protocols: the protocol used in the Reproducibility Project: Psychology, a revised protocol, and an online protocol. The effect averaged one ninth the size of the originally reported effect and was significant only for the revised protocol. However, the difference between the original and revised protocols was not significant.
{"title":"Many Labs 5: Replication of van Dijk, van Kleef, Steinel, and van Beest (2008)","authors":"Lauren Skorb, B. Aczel, Bence Bakos, Lily Feinberg, Ewa Hałasa, Mathias Kauff, Márton Kovács, Karolina Krasuska, Katarzyna Kuchno, Dylan Manfredi, Andres Montealegre, Emilian Pękala, Damian Pieńkosz, Jonathan D Ravid, K. Rentzsch, B. Szaszi, S. Schulz-Hardt, Barbara Sioma, Péter Szécsi, Attila Szuts, Orsolya Szöke, O. Christ, A. Fedor, William Jiménez-Leal, Rafał Muda, G. Nave, Janos Salamon, T. Schultze, Joshua K. Hartshorne","doi":"10.1177/2515245920927643","DOIUrl":"https://doi.org/10.1177/2515245920927643","url":null,"abstract":"As part of the Many Labs 5 project, we ran a replication of van Dijk, van Kleef, Steinel, and van Beest’s (2008) study examining the effect of emotions in negotiations. They reported that when the consequences of rejection were low, subjects offered fewer chips to angry bargaining partners than to happy partners. We ran this replication under three protocols: the protocol used in the Reproducibility Project: Psychology, a revised protocol, and an online protocol. The effect averaged one ninth the size of the originally reported effect and was significant only for the revised protocol. However, the difference between the original and revised protocols was not significant.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"418 - 428"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920927643","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43925880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Labs 5: Registered Replication of Vohs and Schooler (2008), Experiment 1
Pub Date: 2020-09-01 | DOI: 10.1177/2515245920917931 | Pages: 429–438
N. Buttrick, B. Aczel, L. F. Aeschbach, Bence Bakos, Florian Brühlmann, Heather M. Claypool, J. Hüffmeier, Márton Kovács, Kurt Schuepfer, Péter Szécsi, Attila Szuts, Orsolya Szöke, M. Thomae, Ann-Kathrin Torka, R. J. Walker, Michael Wood
Does convincing people that free will is an illusion reduce their sense of personal responsibility? Vohs and Schooler (2008) found that participants reading from a passage “debunking” free will cheated more on experimental tasks than did those reading from a control passage, an effect mediated by decreased belief in free will. However, this finding was not replicated by Embley, Johnson, and Giner-Sorolla (2015), who found that reading arguments against free will had no effect on cheating in their sample. The present study investigated whether hard-to-understand arguments against free will and a low-reliability measure of free-will beliefs account for Embley et al.’s failure to replicate Vohs and Schooler’s results. Participants (N = 621) were randomly assigned to participate in either a close replication of Vohs and Schooler’s Experiment 1 based on the materials of Embley et al. or a revised protocol, which used an easier-to-understand free-will-belief manipulation and an improved instrument to measure free will. We found that the revisions did not matter. Although the revised measure of belief in free will had better reliability than the original measure, an analysis of the data from the two protocols combined indicated that free-will beliefs were unchanged by the manipulations, d = 0.064, 95% confidence interval = [−0.087, 0.22], and in the focal test, there were no differences in cheating behavior between conditions, d = 0.076, 95% CI = [−0.082, 0.22]. We found that expressed free-will beliefs did not mediate the link between the free-will-belief manipulation and cheating, and in exploratory follow-up analyses, we found that participants expressing lower beliefs in free will were not more likely to cheat in our task.
{"title":"Many Labs 5: Registered Replication of Vohs and Schooler (2008), Experiment 1","authors":"N. Buttrick, B. Aczel, L. F. Aeschbach, Bence Bakos, Florian Brühlmann, Heather M. Claypool, J. Hüffmeier, Márton Kovács, Kurt Schuepfer, Péter Szécsi, Attila Szuts, Orsolya Szöke, M. Thomae, Ann-Kathrin Torka, R. J. Walker, Michael Wood","doi":"10.1177/2515245920917931","DOIUrl":"https://doi.org/10.1177/2515245920917931","url":null,"abstract":"Does convincing people that free will is an illusion reduce their sense of personal responsibility? Vohs and Schooler (2008) found that participants reading from a passage “debunking” free will cheated more on experimental tasks than did those reading from a control passage, an effect mediated by decreased belief in free will. However, this finding was not replicated by Embley, Johnson, and Giner-Sorolla (2015), who found that reading arguments against free will had no effect on cheating in their sample. The present study investigated whether hard-to-understand arguments against free will and a low-reliability measure of free-will beliefs account for Embley et al.’s failure to replicate Vohs and Schooler’s results. Participants (N = 621) were randomly assigned to participate in either a close replication of Vohs and Schooler’s Experiment 1 based on the materials of Embley et al. or a revised protocol, which used an easier-to-understand free-will-belief manipulation and an improved instrument to measure free will. We found that the revisions did not matter. Although the revised measure of belief in free will had better reliability than the original measure, an analysis of the data from the two protocols combined indicated that free-will beliefs were unchanged by the manipulations, d = 0.064, 95% confidence interval = [−0.087, 0.22], and in the focal test, there were no differences in cheating behavior between conditions, d = 0.076, 95% CI = [−0.082, 0.22]. We found that expressed free-will beliefs did not mediate the link between the free-will-belief manipulation and cheating, and in exploratory follow-up analyses, we found that participants expressing lower beliefs in free will were not more likely to cheat in our task.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"429 - 438"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920917931","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41728397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Labs 5: Registered Multisite Replication of the Tempting-Fate Effects in Risen and Gilovich (2008)
Pub Date: 2020-09-01 | DOI: 10.1177/2515245918785165 | Pages: 394–404
Maya B. Mathur, Diane-Jo Bart-Plange, B. Aczel, Michael H Bernstein, Antonia M. Ciunci, C. Ebersole, Filipe Falcão, Kayla Ashbaugh, Rias A. Hilliard, Alan Jern, Danielle J Kellier, G. Kessinger, Vanessa S. Kolb, Márton Kovács, C. Lage, Eleanor V. Langford, S. Lins, Dylan Manfredi, Venus Meyet, D. Moore, G. Nave, Christian Nunnally, Anna Palinkas, Kimberly P. Parks, S. Pessers, Tiago Ramos, Kaylis Hase Rudy, Janos Salamon, Rachel L. Shubella, Rúben Silva, S. Steegen, L. Stein, B. Szaszi, Péter Szécsi, F. Tuerlinckx, W. Vanpaemel, M. Vlachou, B. Wiggins, David Zealley, Mark Zrubka, Michael C. Frank
Risen and Gilovich (2008) found that subjects believed that “tempting fate” would be punished with ironic bad outcomes (a main effect), and that this effect was magnified when subjects were under cognitive load (an interaction). A previous replication study (Frank & Mathur, 2016) that used an online implementation of the protocol on Amazon Mechanical Turk failed to replicate both the main effect and the interaction. Before this replication was run, the authors of the original study expressed concern that the cognitive-load manipulation may be less effective when implemented online than when implemented in the lab and that subjects recruited online may also respond differently to the specific experimental scenario chosen for the replication. A later, large replication project, Many Labs 2 (Klein et al., 2018), replicated the main effect (though the effect size was smaller than in the original study), but the interaction was not assessed. Attempting to replicate the interaction while addressing the original authors’ concerns regarding the protocol for the first replication study, we developed a new protocol in collaboration with the original authors. We used four university sites (N = 754) chosen for similarity to the site of the original study to conduct a high-powered, preregistered replication focused primarily on the interaction effect. Results from these sites did not support the interaction or the main effect and were comparable to results obtained at six additional universities that were less similar to the original site. Post hoc analyses did not provide strong evidence for statistical inconsistency between the original study’s estimates and our estimates; that is, the original study’s results would not have been extremely unlikely in the estimated distribution of population effects in our sites. We also collected data from a new Mechanical Turk sample under the first replication study’s protocol, and results were not meaningfully different from those obtained with the new protocol at universities similar to the original site. Secondary analyses failed to support proposed substantive mechanisms for the failure to replicate.
{"title":"Many Labs 5: Registered Multisite Replication of the Tempting-Fate Effects in Risen and Gilovich (2008)","authors":"Maya B. Mathur, Diane-Jo Bart-Plange, B. Aczel, Michael H Bernstein, Antonia M. Ciunci, C. Ebersole, Filipe Falcão, Kayla Ashbaugh, Rias A. Hilliard, Alan Jern, Danielle J Kellier, G. Kessinger, Vanessa S. Kolb, Márton Kovács, C. Lage, Eleanor V. Langford, S. Lins, Dylan Manfredi, Venus Meyet, D. Moore, G. Nave, Christian Nunnally, Anna Palinkas, Kimberly P. Parks, S. Pessers, Tiago Ramos, Kaylis Hase Rudy, Janos Salamon, Rachel L. Shubella, Rúben Silva, S. Steegen, L. Stein, B. Szaszi, Péter Szécsi, F. Tuerlinckx, W. Vanpaemel, M. Vlachou, B. Wiggins, David Zealley, Mark Zrubka, Michael C. Frank","doi":"10.1177/2515245918785165","DOIUrl":"https://doi.org/10.1177/2515245918785165","url":null,"abstract":"Risen and Gilovich (2008) found that subjects believed that “tempting fate” would be punished with ironic bad outcomes (a main effect), and that this effect was magnified when subjects were under cognitive load (an interaction). A previous replication study (Frank & Mathur, 2016) that used an online implementation of the protocol on Amazon Mechanical Turk failed to replicate both the main effect and the interaction. Before this replication was run, the authors of the original study expressed concern that the cognitive-load manipulation may be less effective when implemented online than when implemented in the lab and that subjects recruited online may also respond differently to the specific experimental scenario chosen for the replication. A later, large replication project, Many Labs 2 (Klein et al. 2018), replicated the main effect (though the effect size was smaller than in the original study), but the interaction was not assessed. Attempting to replicate the interaction while addressing the original authors’ concerns regarding the protocol for the first replication study, we developed a new protocol in collaboration with the original authors. We used four university sites (N = 754) chosen for similarity to the site of the original study to conduct a high-powered, preregistered replication focused primarily on the interaction effect. Results from these sites did not support the interaction or the main effect and were comparable to results obtained at six additional universities that were less similar to the original site. Post hoc analyses did not provide strong evidence for statistical inconsistency between the original study’s estimates and our estimates; that is, the original study’s results would not have been extremely unlikely in the estimated distribution of population effects in our sites. We also collected data from a new Mechanical Turk sample under the first replication study’s protocol, and results were not meaningfully different from those obtained with the new protocol at universities similar to the original site. 
Secondary analyses failed to support proposed substantive mechanisms for the failure to replicate.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"394 - 404"},"PeriodicalIF":13.6,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245918785165","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47139386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Laypeople Can Predict Which Social-Science Studies Will Be Replicated Successfully
Pub Date: 2020-08-21 | DOI: 10.1177/2515245920919667 | Pages: 267–285
S. Hoogeveen, A. Sarafoglou, E. Wagenmakers
Large-scale collaborative projects recently demonstrated that several key findings from the social-science literature could not be replicated successfully. Here, we assess the extent to which a finding’s replication success relates to its intuitive plausibility. Each of 27 high-profile social-science findings was evaluated by 233 people without a Ph.D. in psychology. Results showed that these laypeople predicted replication success with above-chance accuracy (i.e., 59%). In addition, when participants were informed about the strength of evidence from the original studies, this boosted their prediction performance to 67%. We discuss the prediction patterns and apply signal detection theory to disentangle detection ability from response bias. Our study suggests that laypeople’s predictions contain useful information for assessing the probability that a given finding will be replicated successfully.
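For readers unfamiliar with the signal-detection framing, the short R sketch below illustrates how detection ability and response bias are separated; it is illustrative only, not the authors' analysis code, and the prediction counts are invented. Treating "this finding will replicate" as the signal response, d′ indexes how well forecasters discriminate replicable from non-replicable findings, and the criterion c indexes their overall willingness to predict success.

    # Illustrative only: hypothetical counts, not data from this study.
    hits         <- 140  # predicted "will replicate" for findings that did replicate
    misses       <-  60  # predicted "will not replicate" for findings that did replicate
    false_alarms <-  55  # predicted "will replicate" for findings that did not replicate
    correct_rej  <- 145  # predicted "will not replicate" for findings that did not replicate

    hit_rate <- hits / (hits + misses)
    fa_rate  <- false_alarms / (false_alarms + correct_rej)

    # Equal-variance Gaussian signal detection model
    d_prime   <- qnorm(hit_rate) - qnorm(fa_rate)          # detection ability
    criterion <- -(qnorm(hit_rate) + qnorm(fa_rate)) / 2   # response bias (positive = reluctant to say "will replicate")

    round(c(d_prime = d_prime, criterion = criterion), 2)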
{"title":"Laypeople Can Predict Which Social-Science Studies Will Be Replicated Successfully","authors":"S. Hoogeveen, A. Sarafoglou, E. Wagenmakers","doi":"10.1177/2515245920919667","DOIUrl":"https://doi.org/10.1177/2515245920919667","url":null,"abstract":"Large-scale collaborative projects recently demonstrated that several key findings from the social-science literature could not be replicated successfully. Here, we assess the extent to which a finding’s replication success relates to its intuitive plausibility. Each of 27 high-profile social-science findings was evaluated by 233 people without a Ph.D. in psychology. Results showed that these laypeople predicted replication success with above-chance accuracy (i.e., 59%). In addition, when participants were informed about the strength of evidence from the original studies, this boosted their prediction performance to 67%. We discuss the prediction patterns and apply signal detection theory to disentangle detection ability from response bias. Our study suggests that laypeople’s predictions contain useful information for assessing the probability that a given finding will be replicated successfully.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"267 - 285"},"PeriodicalIF":13.6,"publicationDate":"2020-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920919667","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44021324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Guide to Posting and Managing Preprints
Pub Date: 2020-08-21 | DOI: 10.1177/25152459211019948
Hannah Moshontz, Grace Binion, H. Walton, B. T. Brown, M. Syed
Posting preprints online allows psychological scientists to get feedback, speed dissemination, and ensure public access to their work. This guide is designed to help psychological scientists post preprints and manage them across the publication pipeline. We review terminology, provide a historical and legal overview of preprints, and give guidance on posting and managing preprints before, during, or after the peer-review process to achieve different aims (e.g., get feedback, speed dissemination, achieve open access). We offer concrete recommendations to authors, such as post preprints that are complete and carefully proofread; post preprints on a dedicated preprint server that assigns DOIs, provides editable metadata, is indexed by Google Scholar, supports review and endorsements, and supports version control; include a draft date and information about the paper’s status on the cover page; license preprints with CC BY licenses that permit public use with attribution; and keep preprints up to date after major revisions. Although our focus is on preprints (unpublished versions of a work), we also offer information relevant to postprints (author-formatted, post-peer-review versions of a work) and work that will not otherwise be published (e.g., theses and dissertations).
{"title":"A Guide to Posting and Managing Preprints","authors":"Hannah Moshontz, Grace Binion, H. Walton, B. T. Brown, M. Syed","doi":"10.1177/25152459211019948","DOIUrl":"https://doi.org/10.1177/25152459211019948","url":null,"abstract":"Posting preprints online allows psychological scientists to get feedback, speed dissemination, and ensure public access to their work. This guide is designed to help psychological scientists post preprints and manage them across the publication pipeline. We review terminology, provide a historical and legal overview of preprints, and give guidance on posting and managing preprints before, during, or after the peer-review process to achieve different aims (e.g., get feedback, speed dissemination, achieve open access). We offer concrete recommendations to authors, such as post preprints that are complete and carefully proofread; post preprints in a dedicated preprint server that assigns DOIs, provides editable metadata, is indexed by GoogleScholar, supports review and endorsements, and supports version control; include a draft date and information about the paper’s status on the cover page; license preprints with CC BY licenses that permit public use with attribution; and keep preprints up to date after major revisions. Although our focus is on preprints (unpublished versions of a work), we also offer information relevant to postprints (author-formatted, post-peer-review versions of a work) and work that will not otherwise be published (e.g., theses and dissertations).","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/25152459211019948","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44442586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation Studies as a Tool to Understand Bayes Factors
Pub Date: 2020-07-30 | DOI: 10.1177/2515245920972624
D. van Ravenzwaaij, Alexander Etz
When social scientists wish to learn about an empirical phenomenon, they perform an experiment. When they wish to learn about a complex numerical phenomenon, they can perform a simulation study. The goal of this Tutorial is twofold. First, it introduces how to set up a simulation study using the relatively simple example of simulating from the prior. Second, it demonstrates how simulation can be used to learn about the Jeffreys-Zellner-Siow (JZS) Bayes factor, a currently popular implementation of the Bayes factor employed in the BayesFactor R package and freeware program JASP. Many technical expositions on Bayes factors exist, but these may be somewhat inaccessible to researchers who are not specialized in statistics. In a step-by-step approach, this Tutorial shows how a simple simulation script can be used to approximate the calculation of the Bayes factor. We explain how a researcher can write such a sampler to approximate Bayes factors in a few lines of code, what the logic is behind the Savage-Dickey method used to visualize Bayes factors, and what the practical differences are for different choices of the prior distribution used to calculate Bayes factors.
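As a concrete illustration of the simulate-from-the-prior approach described above, the following R sketch approximates a one-sample JZS Bayes factor by Monte Carlo integration over the default Cauchy prior on effect size. This is a rough sketch under standard assumptions (Cauchy scale sqrt(2)/2, likelihood expressed through the noncentral t distribution), not the Tutorial's own script; the toy data and variable names are invented for the example.

    # Rough sketch, not the Tutorial's script: approximate a one-sample JZS
    # Bayes factor by simulating effect sizes from the prior.
    set.seed(1)
    n <- 30
    x <- rnorm(n, mean = 0.3, sd = 1)        # toy data with a true effect of about 0.3 SD
    t_obs <- mean(x) / (sd(x) / sqrt(n))     # observed t statistic

    # Density of the observed t statistic given a true standardized effect delta
    lik <- function(delta) dt(t_obs, df = n - 1, ncp = delta * sqrt(n))

    # Marginal likelihood under H0 (delta = 0)
    m0 <- lik(0)

    # Marginal likelihood under H1: average the likelihood over draws from the
    # default JZS prior, delta ~ Cauchy(0, sqrt(2)/2) ("medium" scale in BayesFactor/JASP)
    delta_prior <- rcauchy(1e5, location = 0, scale = sqrt(2) / 2)
    m1 <- mean(lik(delta_prior))

    bf10 <- m1 / m0
    bf10  # compare against BayesFactor::ttestBF(x, rscale = "medium")

With enough prior draws, this estimate should agree, up to Monte Carlo error, with the value returned by the BayesFactor package for the same data.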
{"title":"Simulation Studies as a Tool to Understand Bayes Factors","authors":"D. van Ravenzwaaij, Alexander Etz","doi":"10.1177/2515245920972624","DOIUrl":"https://doi.org/10.1177/2515245920972624","url":null,"abstract":"When social scientists wish to learn about an empirical phenomenon, they perform an experiment. When they wish to learn about a complex numerical phenomenon, they can perform a simulation study. The goal of this Tutorial is twofold. First, it introduces how to set up a simulation study using the relatively simple example of simulating from the prior. Second, it demonstrates how simulation can be used to learn about the Jeffreys-Zellner-Siow (JZS) Bayes factor, a currently popular implementation of the Bayes factor employed in the BayesFactor R package and freeware program JASP. Many technical expositions on Bayes factors exist, but these may be somewhat inaccessible to researchers who are not specialized in statistics. In a step-by-step approach, this Tutorial shows how a simple simulation script can be used to approximate the calculation of the Bayes factor. We explain how a researcher can write such a sampler to approximate Bayes factors in a few lines of code, what the logic is behind the Savage-Dickey method used to visualize Bayes factors, and what the practical differences are for different choices of the prior distribution used to calculate Bayes factors.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2020-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920972624","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48453971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why Bayesian “Evidence for H1” in One Condition and Bayesian “Evidence for H0” in Another Condition Does Not Mean Good-Enough Bayesian Evidence for a Difference Between the Conditions
Pub Date: 2020-07-07 | DOI: 10.1177/2515245920913019 | Pages: 300–308
B. Pálfi, Z. Dienes
Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
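To make the inferential point concrete, here is a small R sketch for a 2 × 2 between-subjects case (simulated data; this is not the authors' R script or Shiny app). It computes the tempting pair of simple-effect Bayes factors and then the Bayes factor for the interaction itself, obtained by directly comparing models with and without the interaction term.

    # Illustrative sketch with simulated data; not the Tutorial's materials.
    library(BayesFactor)
    set.seed(123)
    n <- 50
    a_ctrl <- rnorm(n, 0.0); a_trt <- rnorm(n, 0.5)   # condition A: moderate treatment effect
    b_ctrl <- rnorm(n, 0.0); b_trt <- rnorm(n, 0.2)   # condition B: smaller treatment effect

    # The tempting but insufficient analysis: one Bayes factor per condition
    bf_A <- ttestBF(a_trt, a_ctrl)   # may look like "evidence for H1" in condition A
    bf_B <- ttestBF(b_trt, b_ctrl)   # may look like "evidence for H0" in condition B

    # The required analysis: test the interaction directly
    dat <- data.frame(
      y         = c(a_ctrl, a_trt, b_ctrl, b_trt),
      treatment = factor(rep(rep(c("ctrl", "trt"), each = n), times = 2)),
      condition = factor(rep(c("A", "B"), each = 2 * n))
    )
    bf_full <- lmBF(y ~ treatment * condition, data = dat)
    bf_main <- lmBF(y ~ treatment + condition, data = dat)
    bf_interaction <- bf_full / bf_main   # evidence for the interaction itself

Printing bf_A, bf_B, and bf_interaction side by side illustrates the article's point: the two simple-effect Bayes factors can fall on opposite sides of a conventional cutoff while the direct test of the interaction remains equivocal.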
{"title":"Why Bayesian “Evidence for H1” in One Condition and Bayesian “Evidence for H0” in Another Condition Does Not Mean Good-Enough Bayesian Evidence for a Difference Between the Conditions","authors":"B. Pálfi, Z. Dienes","doi":"10.1177/2515245920913019","DOIUrl":"https://doi.org/10.1177/2515245920913019","url":null,"abstract":"Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this as sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"3 1","pages":"300 - 308"},"PeriodicalIF":13.6,"publicationDate":"2020-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920913019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47817136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}