Data Visualization Using R for Researchers Who Do Not Use R
Pub Date: 2022-04-01 | DOI: 10.1177/25152459221074654
E. Nordmann, P. McAleer, Wilhelmiina Toivo, H. Paterson, L. DeBruine
In addition to benefiting reproducibility and transparency, one of the advantages of using R is that researchers have a much larger range of fully customizable data-visualization options than are typically available in point-and-click software because of the open-source nature of R. These visualization options not only look attractive but also can increase transparency about the distribution of the underlying data rather than relying on commonly used visualizations of aggregations, such as bar charts of means. In this tutorial, we provide a practical introduction to data visualization using R specifically aimed at researchers who have little to no prior experience of using R. First, we detail the rationale for using R for data visualization and introduce the “grammar of graphics” that underlies data visualization using the ggplot2 package. The tutorial then walks the reader through how to replicate plots that are commonly available in point-and-click software, such as histograms and box plots, and shows how the code for these “basic” plots can be easily extended to less commonly available options, such as violin-box plots. The data set and code used in this tutorial and an interactive version with activity solutions, additional resources, and advanced plotting options are available at https://osf.io/bj83f/.
{"title":"Data Visualization Using R for Researchers Who Do Not Use R","authors":"E. Nordmann, P. McAleer, Wilhelmiina Toivo, H. Paterson, L. DeBruine","doi":"10.1177/25152459221074654","DOIUrl":"https://doi.org/10.1177/25152459221074654","url":null,"abstract":"In addition to benefiting reproducibility and transparency, one of the advantages of using R is that researchers have a much larger range of fully customizable data visualizations options than are typically available in point-and-click software because of the open-source nature of R. These visualization options not only look attractive but also can increase transparency about the distribution of the underlying data rather than relying on commonly used visualizations of aggregations, such as bar charts of means. In this tutorial, we provide a practical introduction to data visualization using R specifically aimed at researchers who have little to no prior experience of using R. First, we detail the rationale for using R for data visualization and introduce the “grammar of graphics” that underlies data visualization using the ggplot package. The tutorial then walks the reader through how to replicate plots that are commonly available in point-and-click software, such as histograms and box plots, and shows how the code for these “basic” plots can be easily extended to less commonly available options, such as violin box plots. The data set and code used in this tutorial and an interactive version with activity solutions, additional resources, and advanced plotting options are available at https://osf.io/bj83f/.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49445523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing GPS Data for Psychological Research: A Tutorial
Pub Date: 2022-04-01 | DOI: 10.1177/25152459221082680
Sandrine R. Müller, J. Bayer, M. Ross, Jerry Mount, Clemens Stachl, Gabriella M. Harari, Yung-Ju Chang, Huyen T. K. Le
The ubiquity of location-data-enabled devices provides novel avenues for psychology researchers to incorporate spatial analytics into their studies. Spatial analytics use global positioning system (GPS) data to assess and understand mobility behavior (e.g., locations visited, movement patterns). In this tutorial, we provide a practical guide to analyzing GPS data in R and introduce researchers to key procedures and resources for conducting spatial analytics. We show readers how to clean GPS data, compute mobility features (e.g., time spent at home, number of unique places visited), and visualize locations and movement patterns. In addition, we discuss the challenges of ensuring participant privacy and interpreting the psychological implications of mobility behaviors. The tutorial is accompanied by an R Markdown script and a simulated GPS data set made available on the OSF.
{"title":"Analyzing GPS Data for Psychological Research: A Tutorial","authors":"Sandrine R. Müller, J. Bayer, M. Ross, Jerry Mount, Clemens Stachl, Gabriella M. Harari, Yung-Ju Chang, Huyen T. K. Le","doi":"10.1177/25152459221082680","DOIUrl":"https://doi.org/10.1177/25152459221082680","url":null,"abstract":"The ubiquity of location-data-enabled devices provides novel avenues for psychology researchers to incorporate spatial analytics into their studies. Spatial analytics use global positioning system (GPS) data to assess and understand mobility behavior (e.g., locations visited, movement patterns). In this tutorial, we provide a practical guide to analyzing GPS data in R and introduce researchers to key procedures and resources for conducting spatial analytics. We show readers how to clean GPS data, compute mobility features (e.g., time spent at home, number of unique places visited), and visualize locations and movement patterns. In addition, we discuss the challenges of ensuring participant privacy and interpreting the psychological implications of mobility behaviors. The tutorial is accompanied by an R Markdown script and a simulated GPS data set made available on the OSF.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48228048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing Analysis Blinding With Preregistration in the Many-Analysts Religion Project
Pub Date: 2022-01-21 | DOI: 10.1177/25152459221128319
A. Sarafoglou, S. Hoogeveen, E. Wagenmakers
In psychology, preregistration is the most widely used method to ensure the confirmatory status of analyses. However, the method has disadvantages: Not only is it perceived as effortful and time-consuming, but reasonable deviations from the analysis plan demote the status of the study to exploratory. An alternative to preregistration is analysis blinding, in which researchers develop their analysis on an altered version of the data. In this experimental study, we compare the reported efficiency and convenience of the two methods in the context of the Many-Analysts Religion Project. In this project, 120 teams answered the same research questions on the same data set, either preregistering their analysis (n = 61) or using analysis blinding (n = 59). Our results provide strong evidence (Bayes factor [BF] = 71.40) for the hypothesis that analysis blinding leads to fewer deviations from the analysis plan and that, when teams did deviate, they did so on fewer aspects. Contrary to our hypothesis, we found strong evidence (BF = 13.19) that both methods required approximately the same amount of time. Finally, we found no evidence that analysis blinding was perceived as less effortful and only moderate evidence that it was perceived as less frustrating. We conclude that analysis blinding does not mean less work, but researchers can still benefit from the method because they can plan more appropriate analyses from which they deviate less frequently.
{"title":"Comparing Analysis Blinding With Preregistration in the Many-Analysts Religion Project","authors":"A. Sarafoglou, S. Hoogeveen, E. Wagenmakers","doi":"10.1177/25152459221128319","DOIUrl":"https://doi.org/10.1177/25152459221128319","url":null,"abstract":"In psychology, preregistration is the most widely used method to ensure the confirmatory status of analyses. However, the method has disadvantages: Not only is it perceived as effortful and time-consuming, but reasonable deviations from the analysis plan demote the status of the study to exploratory. An alternative to preregistration is analysis blinding, in which researchers develop their analysis on an altered version of the data. In this experimental study, we compare the reported efficiency and convenience of the two methods in the context of the Many-Analysts Religion Project. In this project, 120 teams answered the same research questions on the same data set, either preregistering their analysis (n = 61) or using analysis blinding (n = 59). Our results provide strong evidence (Bayes factor [BF] = 71.40) for the hypothesis that analysis blinding leads to fewer deviations from the analysis plan, and if teams deviated, they did so on fewer aspects. Contrary to our hypothesis, we found strong evidence (BF = 13.19) that both methods required approximately the same amount of time. Finally, we found no and moderate evidence on whether analysis blinding was perceived as less effortful and frustrating, respectively. We conclude that analysis blinding does not mean less work, but researchers can still benefit from the method because they can plan more appropriate analyses from which they deviate less frequently.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"6 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49132518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Four Internal Inconsistencies in Tversky and Kahneman’s (1992) Cumulative Prospect Theory Article: A Case Study in Ambiguous Theoretical Scope and Ambiguous Parsimony
Pub Date: 2022-01-01 | DOI: 10.1177/25152459221074653
Michel Regenwetter, M. Robinson, Cihang Wang
Scholars heavily rely on theoretical scope as a tool to challenge existing theory. We advocate that scientific discovery could be accelerated if far more effort were invested into also overtly specifying and painstakingly delineating the intended purview of any proposed new theory at the time of its inception. As a case study, we consider Tversky and Kahneman (1992). They motivated their Nobel-Prize-winning cumulative prospect theory with evidence that in each of two studies, roughly half of the participants violated independence, a property required by expected utility theory (EUT). Yet even at the time of inception, new theories may reveal signs of their own limited scope. For example, we show that Tversky and Kahneman’s findings in their own test of loss aversion provide evidence that at least half of their participants, in turn, violated cumulative prospect theory in that study. We highlight a combination of conflicting findings in the original article that makes it ambiguous to evaluate both cumulative prospect theory’s scope and its parsimony on the authors’ own evidence. The Tversky and Kahneman article is illustrative of a social and behavioral research culture in which theoretical scope plays an extremely asymmetric role: to call existing theory into question and motivate surrogate proposals.
{"title":"Four Internal Inconsistencies in Tversky and Kahneman’s (1992) Cumulative Prospect Theory Article: A Case Study in Ambiguous Theoretical Scope and Ambiguous Parsimony","authors":"Michel Regenwetter, M. Robinson, Cihang Wang","doi":"10.1177/25152459221074653","DOIUrl":"https://doi.org/10.1177/25152459221074653","url":null,"abstract":"Scholars heavily rely on theoretical scope as a tool to challenge existing theory. We advocate that scientific discovery could be accelerated if far more effort were invested into also overtly specifying and painstakingly delineating the intended purview of any proposed new theory at the time of its inception. As a case study, we consider Tversky and Kahneman (1992). They motivated their Nobel-Prize-winning cumulative prospect theory with evidence that in each of two studies, roughly half of the participants violated independence, a property required by expected utility theory (EUT). Yet even at the time of inception, new theories may reveal signs of their own limited scope. For example, we show that Tversky and Kahneman’s findings in their own test of loss aversion provide evidence that at least half of their participants violated their theory, in turn, in that study. We highlight a combination of conflicting findings in the original article that make it ambiguous to evaluate both cumulative prospect theory’s scope and its parsimony on the authors’ own evidence. The Tversky and Kahneman article is illustrative of a social and behavioral research culture in which theoretical scope plays an extremely asymmetric role: to call existing theory into question and motivate surrogate proposals.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48069815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PsyBuilder: An Open-Source, Cross-Platform Graphical Experiment Builder for Psychtoolbox With Built-In Performance Optimization
Pub Date: 2022-01-01 | DOI: 10.1177/25152459211070573
Z. Lin, Zhe Yang, Chengzhi Feng, Yang Zhang
Psychtoolbox is among the most popular open-source software packages for stimulus presentation and response collection. It provides flexibility and power in the choice of stimuli and responses, in addition to precision in control and timing. However, Psychtoolbox requires coding in MATLAB (or an equivalent, e.g., Octave). Scripting is challenging to learn and can unwittingly introduce timing inaccuracies. It can also be time-consuming and error-prone even for experienced users. We have developed the first general-purpose graphical experiment builder for Psychtoolbox, called PsyBuilder, for both new and experienced users. The builder allows users to graphically implement sophisticated experimental tasks through intuitive drag and drop without the need to script. The generated code has optimized timing precision built in and comes with detailed comments to facilitate customization. Because users can see exactly how the code changes in response to modifications in the graphical interface, PsyBuilder can also bolster the understanding of programming in ways that were not previously possible. In this tutorial, we first describe its interface, then walk the reader through the graphical building process using a concrete experiment, and finally address important issues from the perspective of potential adopters.
{"title":"PsyBuilder: An Open-Source, Cross-Platform Graphical Experiment Builder for Psychtoolbox With Built-In Performance Optimization","authors":"Z Lin, Zhe Yang, Chengzhi Feng, Yang Zhang","doi":"10.1177/25152459211070573","DOIUrl":"https://doi.org/10.1177/25152459211070573","url":null,"abstract":"Psychtoolbox is among the most popular open-source software packages for stimulus presentation and response collection. It provides flexibility and power in the choice of stimuli and responses, in addition to precision in control and timing. However, Psychtoolbox requires coding in MATLAB (or its equivalent, e.g., Octave). Scripting is challenging to learn and can lead to timing inaccuracies unwittingly. It can also be time-consuming and error prone even for experienced users. We have developed the first general-purpose graphical experiment builder for Psychtoolbox, called PsyBuilder, for both new and experienced users. The builder allows users to graphically implement sophisticated experimental tasks through intuitive drag and drop without the need to script. The output codes have built-in optimized timing precision and come with detailed comments to facilitate customization. Because users can see exactly how the code changes in response to modifications in the graphical interface, PsyBuilder can also bolster the understanding of programming in ways that were not previously possible. In this tutorial, we first describe its interface, then walk the reader through the graphical building process using a concrete experiment, and finally address important issues from the perspective of potential adopters.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45824282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies
Pub Date: 2022-01-01 | DOI: 10.1177/25152459211054059
Márton Kovács, D. van Ravenzwaaij, Rink Hoekstra, B. Aczel
Planning sample size often requires researchers to identify a statistical technique and to make several choices during their calculations. Currently, there is a lack of clear guidelines for researchers to find and use the applicable procedure. In the present tutorial, we introduce a web app and R package that offer nine different procedures to determine and justify the sample size for independent two-group study designs. The application highlights the most important decision points for each procedure and suggests example justifications for them. The resulting sample-size report can serve as a template for preregistrations and manuscripts.
{"title":"SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies","authors":"Márton Kovács, D. van Ravenzwaaij, Rink Hoekstra, B. Aczel","doi":"10.1177/25152459211054059","DOIUrl":"https://doi.org/10.1177/25152459211054059","url":null,"abstract":"Planning sample size often requires researchers to identify a statistical technique and to make several choices during their calculations. Currently, there is a lack of clear guidelines for researchers to find and use the applicable procedure. In the present tutorial, we introduce a web app and R package that offer nine different procedures to determine and justify the sample size for independent two-group study designs. The application highlights the most important decision points for each procedure and suggests example justifications for them. The resulting sample-size report can serve as a template for preregistrations and manuscripts.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47806351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Caution, Preprint! Brief Explanations Allow Nonscientists to Differentiate Between Preprints and Peer-Reviewed Journal Articles
Pub Date: 2022-01-01 | DOI: 10.1177/25152459211070559
Tobias Wingen, J. Berkessel, S. Dohle
A growing number of psychological research findings are initially published as preprints. Preprints are not peer reviewed and thus have not undergone the established scientific quality-control process. Many researchers hence worry that these preprints reach nonscientists, such as practitioners, journalists, and policymakers, who might be unable to differentiate them from the peer-reviewed literature. Across five studies in Germany and the United States, we investigated whether this concern is warranted and whether this problem can be solved by providing nonscientists with a brief explanation of preprints and the peer-review process. Studies 1 and 2 showed that without an explanation, nonscientists perceive research findings published as preprints as equally credible as findings published as peer-reviewed articles. However, an explanation of the peer-review process reduces the credibility of preprints (Studies 3 and 4). In Study 5, we developed and tested a shortened version of this explanation, which we recommend adding to preprints. This explanation again allowed nonscientists to differentiate between preprints and the peer-reviewed literature. In sum, our research demonstrates that even a short explanation of the concept of preprints and their lack of peer review allows nonscientists who evaluate scientific findings to adjust their credibility perceptions accordingly. This would allow harvesting the benefits of preprints, such as faster and more accessible science communication, while reducing concerns about public overconfidence in the presented findings.
{"title":"Caution, Preprint! Brief Explanations Allow Nonscientists to Differentiate Between Preprints and Peer-Reviewed Journal Articles","authors":"Tobias Wingen, J. Berkessel, S. Dohle","doi":"10.1177/25152459211070559","DOIUrl":"https://doi.org/10.1177/25152459211070559","url":null,"abstract":"A growing number of psychological research findings are initially published as preprints. Preprints are not peer reviewed and thus did not undergo the established scientific quality-control process. Many researchers hence worry that these preprints reach nonscientists, such as practitioners, journalists, and policymakers, who might be unable to differentiate them from the peer-reviewed literature. Across five studies in Germany and the United States, we investigated whether this concern is warranted and whether this problem can be solved by providing nonscientists with a brief explanation of preprints and the peer-review process. Studies 1 and 2 showed that without an explanation, nonscientists perceive research findings published as preprints as equally credible as findings published as peer-reviewed articles. However, an explanation of the peer-review process reduces the credibility of preprints (Studies 3 and 4). In Study 5, we developed and tested a shortened version of this explanation, which we recommend adding to preprints. This explanation again allowed nonscientists to differentiate between preprints and the peer-reviewed literature. In sum, our research demonstrates that even a short explanation of the concept of preprints and their lack of peer review allows nonscientists who evaluate scientific findings to adjust their credibility perception accordingly. This would allow harvesting the benefits of preprints, such as faster and more accessible science communication, while reducing concerns about public overconfidence in the presented findings.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47237782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
These Are Not the Effects You Are Looking for: Causality and the Within-/Between-Persons Distinction in Longitudinal Data Analysis
Pub Date: 2021-10-11 | DOI: 10.1177/25152459221140842
J. Rohrer, K. Murayama
In psychological science, researchers often pay particular attention to the distinction between within- and between-persons relationships in longitudinal data analysis. Here, we aim to clarify the relationship between the within- and between-persons distinction and causal inference and show that the distinction is informative but does not play a decisive role in causal inference. Our main points are threefold. First, within-persons data are not necessary for causal inference; for example, between-persons experiments can inform about (average) causal effects. Second, within-persons data are not sufficient for causal inference; for example, time-varying confounders can lead to spurious within-persons associations. Finally, despite not being sufficient, within-persons data can be tremendously helpful for causal inference. We provide pointers to help readers navigate the more technical literature on longitudinal models and conclude with a call for more conceptual clarity: Instead of letting statistical models dictate which substantive questions researchers ask, researchers should start with well-defined theoretical estimands, which in turn determine both study design and data analysis.
{"title":"These Are Not the Effects You Are Looking for: Causality and the Within-/Between-Persons Distinction in Longitudinal Data Analysis","authors":"J. Rohrer, K. Murayama","doi":"10.1177/25152459221140842","DOIUrl":"https://doi.org/10.1177/25152459221140842","url":null,"abstract":"In psychological science, researchers often pay particular attention to the distinction between within- and between-persons relationships in longitudinal data analysis. Here, we aim to clarify the relationship between the within- and between-persons distinction and causal inference and show that the distinction is informative but does not play a decisive role in causal inference. Our main points are threefold. First, within-persons data are not necessary for causal inference; for example, between-persons experiments can inform about (average) causal effects. Second, within-persons data are not sufficient for causal inference; for example, time-varying confounders can lead to spurious within-persons associations. Finally, despite not being sufficient, within-persons data can be tremendously helpful for causal inference. We provide pointers to help readers navigate the more technical literature on longitudinal models and conclude with a call for more conceptual clarity: Instead of letting statistical models dictate which substantive questions researchers ask, researchers should start with well-defined theoretical estimands, which in turn determine both study design and data analysis.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"6 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44471875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Doing Better Data Visualization
Pub Date: 2021-10-01 | DOI: 10.1177/25152459211045334
Eric Hehman, Sally Y. Xie
Methods in data visualization have rapidly advanced over the past decade. Although social scientists regularly need to visualize the results of their analyses, they receive little training in how to best design their visualizations. This tutorial is for individuals whose goal is to communicate patterns in data as clearly as possible to other consumers of science and is designed to be accessible to both experienced and relatively new users of R and ggplot2. In this article, we assume some basic statistical and visualization knowledge and focus on how to visualize rather than what to visualize. We distill the science and wisdom of data-visualization expertise from books, blogs, and online forum discussion threads into recommendations for social scientists looking to convey their results to other scientists. Overarching design philosophies and color decisions are discussed before giving specific examples of code in R for visualizing central tendencies, proportions, and relationships between variables.
{"title":"Doing Better Data Visualization","authors":"Eric Hehman, Sally Y. Xie","doi":"10.1177/25152459211045334","DOIUrl":"https://doi.org/10.1177/25152459211045334","url":null,"abstract":"Methods in data visualization have rapidly advanced over the past decade. Although social scientists regularly need to visualize the results of their analyses, they receive little training in how to best design their visualizations. This tutorial is for individuals whose goal is to communicate patterns in data as clearly as possible to other consumers of science and is designed to be accessible to both experienced and relatively new users of R and ggplot2. In this article, we assume some basic statistical and visualization knowledge and focus on how to visualize rather than what to visualize. We distill the science and wisdom of data-visualization expertise from books, blogs, and online forum discussion threads into recommendations for social scientists looking to convey their results to other scientists. Overarching design philosophies and color decisions are discussed before giving specific examples of code in R for visualizing central tendencies, proportions, and relationships between variables.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48972354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Failings of Conventional Mediation Analysis and a Design-Based Alternative
Pub Date: 2021-10-01 | DOI: 10.1177/25152459211047227
John G. Bullock, D. Green
Scholars routinely test mediation claims by using some form of measurement-of-mediation analysis whereby outcomes are regressed on treatments and mediators to assess direct and indirect effects. Indeed, it is rare for an issue of any leading journal of social or personality psychology not to include such an analysis. Statisticians have for decades criticized this method on the grounds that it relies on implausible assumptions, but these criticisms have been largely ignored. After presenting examples and simulations that dramatize the weaknesses of the measurement-of-mediation approach, we suggest that scholars instead use an approach that is rooted in experimental design. We propose implicit-mediation analysis, which adds and subtracts features of the treatment in ways that implicate some mediators and not others. We illustrate the approach with examples from recently published articles, explain the differences between the approach and other experimental approaches to mediation, and formalize the assumptions and statistical procedures that allow researchers to learn from experiments that encourage changes in mediators.
{"title":"The Failings of Conventional Mediation Analysis and a Design-Based Alternative","authors":"John G. Bullock, D. Green","doi":"10.1177/25152459211047227","DOIUrl":"https://doi.org/10.1177/25152459211047227","url":null,"abstract":"Scholars routinely test mediation claims by using some form of measurement-of-mediation analysis whereby outcomes are regressed on treatments and mediators to assess direct and indirect effects. Indeed, it is rare for an issue of any leading journal of social or personality psychology not to include such an analysis. Statisticians have for decades criticized this method on the grounds that it relies on implausible assumptions, but these criticisms have been largely ignored. After presenting examples and simulations that dramatize the weaknesses of the measurement-of-mediation approach, we suggest that scholars instead use an approach that is rooted in experimental design. We propose implicit-mediation analysis, which adds and subtracts features of the treatment in ways that implicate some mediators and not others. We illustrate the approach with examples from recently published articles, explain the differences between the approach and other experimental approaches to mediation, and formalize the assumptions and statistical procedures that allow researchers to learn from experiments that encourage changes in mediators.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42725623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}