Conducting Research With People in Lower-Socioeconomic-Status Contexts
Lydia F. Emery, David M. Silverman, Rebecca M. Carey
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231193044
In recent years, the field of psychology has increasingly recognized the importance of conducting research with lower-socioeconomic-status (SES) participants. Given that SES can powerfully shape people’s thoughts and actions, socioeconomically diverse samples are necessary for rigorous, generalizable research. However, even when researchers aim to collect data with these samples, they often encounter methodological and practical challenges to recruiting and retaining lower-SES participants in their studies. We propose that there are two key factors to consider when trying to recruit and retain lower-SES participants—trust and accessibility. Researchers can build trust by creating personal connections with participants and communities, paying participants fairly, and considering how participants will view their research. Researchers can enhance accessibility by recruiting in participants’ own communities, tailoring study administration to participants’ circumstances, and being flexible in payment methods. Our goal is to provide recommendations that can help to build a more inclusive science.
{"title":"Conducting Research With People in Lower-Socioeconomic-Status Contexts","authors":"Lydia F. Emery, David M. Silverman, Rebecca M. Carey","doi":"10.1177/25152459231193044","DOIUrl":"https://doi.org/10.1177/25152459231193044","url":null,"abstract":"In recent years, the field of psychology has increasingly recognized the importance of conducting research with lower-socioeconomic-status (SES) participants. Given that SES can powerfully shape people’s thoughts and actions, socioeconomically diverse samples are necessary for rigorous, generalizable research. However, even when researchers aim to collect data with these samples, they often encounter methodological and practical challenges to recruiting and retaining lower-SES participants in their studies. We propose that there are two key factors to consider when trying to recruit and retain lower-SES participants—trust and accessibility. Researchers can build trust by creating personal connections with participants and communities, paying participants fairly, and considering how participants will view their research. Researchers can enhance accessibility by recruiting in participants’ own communities, tailoring study administration to participants’ circumstances, and being flexible in payment methods. Our goal is to provide recommendations that can help to build a more inclusive science.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136152373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation
Madeleine Pownall, Charlotte R. Pennington, Emma Norris, Marie Juanchich, David Smailes, Sophie Russell, Debbie Gooch, T. Evans, Sofia Persson, Matthew H. C. Mak, L. Tzavella, R. Monk, Thomas Gough, Christopher S. Y. Benwell, M. Elsherif, Emily Farran, Thomas Gallagher-Mitchell, Luke T. Kendrick, Julia Bahnmueller, E. Nordmann, Mirela Zaneva, K. Gilligan-Lee, Marina Bazhydai, Andrew Jones, Jemma Sedgmond, Iris Holzleitner, James Reynolds, Jo Moss, Daniel Farrelly, A. J. Parker, Kait Clark
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231202724
Research shows that questionable research practices (QRPs) are present in undergraduate final-year dissertation projects. One entry-level Open Science practice proposed to mitigate QRPs is “study preregistration,” through which researchers outline their research questions, design, method, and analysis plans before data collection and/or analysis. In this study, we aimed to empirically test the effectiveness of preregistration as a pedagogic tool in undergraduate dissertations using a quasi-experimental design. A total of 89 UK psychology students were recruited, including students who preregistered their empirical quantitative dissertation (n = 52; experimental group) and students who did not (n = 37; control group). Attitudes toward statistics, acceptance of QRPs, and perceived understanding of Open Science were measured both before and after dissertation completion. Exploratory measures included capability, opportunity, and motivation to engage with preregistration, measured at Time 1 only. This study was conducted as a Registered Report; Stage 1 protocol: https://osf.io/9hjbw (date of in-principle acceptance: September 21, 2021). Study preregistration did not significantly affect attitudes toward statistics or acceptance of QRPs. However, students who preregistered reported greater perceived understanding of Open Science concepts from Time 1 to Time 2 compared with students who did not preregister. Exploratory analyses indicated that students who preregistered reported significantly greater capability, opportunity, and motivation to preregister. Qualitative responses revealed that preregistration was perceived to improve clarity and organization of the dissertation, prevent QRPs, and promote rigor. Disadvantages and barriers included time, perceived rigidity, and need for training. These results contribute to discussions surrounding embedding Open Science principles into research training.
{"title":"Evaluating the Pedagogical Effectiveness of Study Preregistration in the Undergraduate Dissertation","authors":"Madeleine Pownall, Charlotte R. Pennington, Emma Norris, Marie Juanchich, David Smailes, Sophie Russell, Debbie Gooch, T. Evans, Sofia Persson, Matthew H. C. Mak, L. Tzavella, R. Monk, Thomas Gough, Christopher S. Y. Benwell, M. Elsherif, Emily Farran, Thomas Gallagher-Mitchell, Luke T. Kendrick, Julia Bahnmueller, E. Nordmann, Mirela Zaneva, K. Gilligan-Lee, Marina Bazhydai, Andrew Jones, Jemma Sedgmond, Iris Holzleitner, James Reynolds, Jo Moss, Daniel Farrelly, A. J. Parker, Kait Clark","doi":"10.1177/25152459231202724","DOIUrl":"https://doi.org/10.1177/25152459231202724","url":null,"abstract":"Research shows that questionable research practices (QRPs) are present in undergraduate final-year dissertation projects. One entry-level Open Science practice proposed to mitigate QRPs is “study preregistration,” through which researchers outline their research questions, design, method, and analysis plans before data collection and/or analysis. In this study, we aimed to empirically test the effectiveness of preregistration as a pedagogic tool in undergraduate dissertations using a quasi-experimental design. A total of 89 UK psychology students were recruited, including students who preregistered their empirical quantitative dissertation (n = 52; experimental group) and students who did not (n = 37; control group). Attitudes toward statistics, acceptance of QRPs, and perceived understanding of Open Science were measured both before and after dissertation completion. Exploratory measures included capability, opportunity, and motivation to engage with preregistration, measured at Time 1 only. This study was conducted as a Registered Report; Stage 1 protocol: https://osf.io/9hjbw (date of in-principle acceptance: September 21, 2021). Study preregistration did not significantly affect attitudes toward statistics or acceptance of QRPs. However, students who preregistered reported greater perceived understanding of Open Science concepts from Time 1 to Time 2 compared with students who did not preregister. Exploratory analyses indicated that students who preregistered reported significantly greater capability, opportunity, and motivation to preregister. Qualitative responses revealed that preregistration was perceived to improve clarity and organization of the dissertation, prevent QRPs, and promote rigor. Disadvantages and barriers included time, perceived rigidity, and need for training. These results contribute to discussions surrounding embedding Open Science principles into research training.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"176 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139327274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open-Science Guidance for Qualitative Research: An Empirically Validated Approach for De-Identifying Sensitive Narrative Data
Rebecca Campbell, McKenzie Javorka, Jasmine Engleton, Kathryn Fishwick, Katie Gregory, Rachael Goodman-Williams
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231205832
The open-science movement seeks to make research more transparent and accessible. To that end, researchers are increasingly expected to share de-identified data with other scholars for review, reanalysis, and reuse. In psychology, open-science practices have been explored primarily within the context of quantitative data, but demands to share qualitative data are becoming more prevalent. Narrative data are far more challenging to de-identify fully, and because qualitative methods are often used in studies with marginalized, minoritized, and/or traumatized populations, data sharing may pose substantial risks for participants if their information can later be reidentified. To date, there has been little guidance in the literature on how to de-identify qualitative data. To address this gap, we developed a methodological framework for remediating sensitive narrative data. This multiphase process is modeled on common qualitative-coding strategies. The first phase includes consultations with diverse stakeholders and sources to understand reidentifiability risks and data-sharing concerns. The second phase outlines an iterative process for recognizing potentially identifiable information and constructing individualized remediation strategies through group review and consensus. The third phase includes multiple strategies for assessing the validity of the de-identification analyses (i.e., whether the remediated transcripts adequately protect participants’ privacy). We applied this framework to a set of 32 qualitative interviews with sexual-assault survivors. We provide case examples of how blurring and redaction techniques can be used to protect names, dates, locations, trauma histories, help-seeking experiences, and other information about dyadic interactions.
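The framework itself is procedural, but its second phase (recognizing potentially identifiable information) can be supported by a scripted first pass. Below is a minimal, hypothetical R sketch, not the authors' validated procedure, that flags date-like and name-like strings in transcript lines for human review; the patterns and the function name are illustrative only.

```r
# Hypothetical first-pass flagging of potentially identifiable strings.
# This only surfaces candidates for the group-review phase; it is not a
# substitute for the consensus-based remediation the authors describe.
flag_identifiers <- function(transcript_lines) {
  date_pattern <- "\\b(January|February|March|April|May|June|July|August|September|October|November|December)\\s+\\d{1,2}(,\\s*\\d{4})?\\b"
  name_pattern <- "\\b(Mr|Mrs|Ms|Dr)\\.?\\s+[A-Z][a-z]+"  # title followed by a capitalized word
  hits <- lapply(seq_along(transcript_lines), function(i) {
    line <- transcript_lines[i]
    found <- c(
      regmatches(line, gregexpr(date_pattern, line, perl = TRUE))[[1]],
      regmatches(line, gregexpr(name_pattern, line, perl = TRUE))[[1]]
    )
    if (length(found) > 0) data.frame(line = i, match = found) else NULL
  })
  do.call(rbind, hits)
}

flag_identifiers(c("I saw Dr. Smith on March 3, 2019.", "Then I went home."))
#>   line         match
#> 1    1 March 3, 2019
#> 2    1     Dr. Smith
```

Flagged spans would then go through the article's group-review and consensus process to decide between blurring and redaction.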
{"title":"Open-Science Guidance for Qualitative Research: An Empirically Validated Approach for De-Identifying Sensitive Narrative Data","authors":"Rebecca Campbell, McKenzie Javorka, Jasmine Engleton, Kathryn Fishwick, Katie Gregory, Rachael Goodman-Williams","doi":"10.1177/25152459231205832","DOIUrl":"https://doi.org/10.1177/25152459231205832","url":null,"abstract":"The open-science movement seeks to make research more transparent and accessible. To that end, researchers are increasingly expected to share de-identified data with other scholars for review, reanalysis, and reuse. In psychology, open-science practices have been explored primarily within the context of quantitative data, but demands to share qualitative data are becoming more prevalent. Narrative data are far more challenging to de-identify fully, and because qualitative methods are often used in studies with marginalized, minoritized, and/or traumatized populations, data sharing may pose substantial risks for participants if their information can be later reidentified. To date, there has been little guidance in the literature on how to de-identify qualitative data. To address this gap, we developed a methodological framework for remediating sensitive narrative data. This multiphase process is modeled on common qualitative-coding strategies. The first phase includes consultations with diverse stakeholders and sources to understand reidentifiability risks and data-sharing concerns. The second phase outlines an iterative process for recognizing potentially identifiable information and constructing individualized remediation strategies through group review and consensus. The third phase includes multiple strategies for assessing the validity of the de-identification analyses (i.e., whether the remediated transcripts adequately protect participants’ privacy). We applied this framework to a set of 32 qualitative interviews with sexual-assault survivors. We provide case examples of how blurring and redaction techniques can be used to protect names, dates, locations, trauma histories, help-seeking experiences, and other information about dyadic interactions.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"4 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139331023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keeping Meta-Analyses Alive and Well: A Tutorial on Implementing and Using Community-Augmented Meta-Analyses in PsychOpen CAMA
Lisa Bucher, Tanja Burgard, Ulrich S. Tran, Gerhard M. Prinz, Michael Bosnjak, Martin Voracek
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231197611
Newly developed, web-based, open-repository concepts, such as community-augmented meta-analysis (CAMA), provide open access to fulfill the needs for transparency and timeliness of synthesized evidence. The main idea of CAMA is to keep meta-analyses up-to-date by allowing the research community to include new evidence continuously. In 2021, the Leibniz Institute for Psychology released a platform, PsychOpen CAMA, which serves as a publication format for CAMAs in all fields of psychology. The present work serves as a tutorial on implementing and using a CAMA in PsychOpen CAMA from a data-provider perspective, using six large-scale meta-analytic data sets on the dark triad of personality as a working example. First, the processes of data contribution and implementation of either new or updated existing data sets are summarized. Furthermore, a step-by-step tutorial on using and interpreting CAMAs guides the reader through the web application. Finally, the tutorial outlines the major benefits and the remaining challenges of CAMAs in PsychOpen CAMA.
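PsychOpen CAMA performs the updating on the platform itself, but the core idea (re-estimating a meta-analysis whenever a new effect size is contributed) can be illustrated offline with the metafor package. A minimal sketch with invented correlations, not data from the platform:

```r
library(metafor)  # standard R meta-analysis package

# Toy data: correlations (ri) and sample sizes (ni) from hypothetical studies.
existing <- data.frame(ri = c(0.21, 0.35, 0.28), ni = c(120, 200, 150))

# Fisher's r-to-z transformation yields effect sizes (yi) and variances (vi).
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = existing)
fit <- rma(yi, vi, data = dat)  # random-effects model

# A CAMA-style update: append a newly contributed study and re-fit.
new_study <- data.frame(ri = 0.10, ni = 300)
dat_updated <- escalc(measure = "ZCOR", ri = ri, ni = ni,
                      data = rbind(existing, new_study))
fit_updated <- rma(yi, vi, data = dat_updated)

# Back-transform the pooled estimate to the correlation metric for reporting.
predict(fit_updated, transf = transf.ztor)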
{"title":"Keeping Meta-Analyses Alive and Well: A Tutorial on Implementing and Using Community-Augmented Meta-Analyses in PsychOpen CAMA","authors":"Lisa Bucher, Tanja Burgard, Ulrich S. Tran, Gerhard M. Prinz, Michael Bosnjak, Martin Voracek","doi":"10.1177/25152459231197611","DOIUrl":"https://doi.org/10.1177/25152459231197611","url":null,"abstract":"Newly developed, web-based, open-repository concepts, such as community-augmented meta-analysis (CAMA), provide open access to fulfill the needs for transparency and timeliness of synthesized evidence. The main idea of CAMA is to keep meta-analyses up-to-date by allowing the research community to include new evidence continuously. In 2021, the Leibniz Institute for Psychology released a platform, PsychOpen CAMA, which serves as a publication format for CAMAs in all fields of psychology. The present work serves as a tutorial on implementing and using a CAMA in PsychOpen CAMA from a data-provider perspective, using six large-scale meta-analytic data sets on the dark triad of personality as a working example. First, the processes of data contribution and implementation of either new or updated existing data sets are summarized. Furthermore, a step-by-step tutorial on using and interpreting CAMAs guides the reader through the web application. Finally, the tutorial outlines the major benefits and the remaining challenges of CAMAs in PsychOpen CAMA.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136152683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Practical Guide to Conversation Research: How to Study What People Say to Each Other
Michael Yeomans, F. Katelynn Boland, Hanne K. Collins, Nicole Abi-Esber, Alison Wood Brooks
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231183919
Conversation—a verbal interaction between two or more people—is a complex, pervasive, and consequential human behavior. Conversations have been studied across many academic disciplines. However, advances in recording and analysis techniques over the last decade have allowed researchers to more directly and precisely examine conversations in natural contexts and at a larger scale than ever before, and these advances open new paths to understand humanity and the social world. Existing reviews of text analysis and conversation research have focused on text generated by a single author (e.g., product reviews, news articles, and public speeches) and thus leave open questions about the unique challenges presented by interactive conversation data (i.e., dialogue). In this article, we suggest approaches to overcome common challenges in the workflow of conversation science, including recording and transcribing conversations, structuring data (to merge turn-level and speaker-level data sets), extracting and aggregating linguistic features, estimating effects, and sharing data. This practical guide is meant to shed light on current best practices and empower more researchers to study conversations more directly—to expand the community of conversation scholars and contribute to a greater cumulative scientific understanding of the social world.
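One workflow step the authors single out, merging turn-level and speaker-level data sets and aggregating linguistic features, has a simple generic shape in R. A minimal sketch with invented column names, not the authors' code:

```r
library(dplyr)

# Toy turn-level data: one row per speaking turn (column names illustrative).
turns <- tibble::tribble(
  ~conv_id, ~speaker_id, ~turn, ~text,
  1, "A", 1, "How was your weekend?",
  1, "B", 2, "Pretty good, we went hiking.",
  1, "A", 3, "Nice! Where did you go?"
)

# Toy speaker-level data: one row per speaker.
speakers <- tibble::tribble(
  ~speaker_id, ~age, ~condition,
  "A", 31, "treatment",
  "B", 27, "control"
)

# Extract a simple linguistic feature (word count per turn), attach speaker
# attributes, and aggregate to one row per speaker per conversation.
turns %>%
  mutate(n_words = lengths(strsplit(text, "\\s+"))) %>%
  left_join(speakers, by = "speaker_id") %>%
  group_by(conv_id, speaker_id, condition) %>%
  summarise(total_turns = n(), mean_words = mean(n_words), .groups = "drop")
```

Real pipelines would substitute richer features (e.g., dictionary or embedding scores) for the word count, but the merge-then-aggregate structure stays the same.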
{"title":"A Practical Guide to Conversation Research: How to Study What People Say to Each Other","authors":"Michael Yeomans, F. Katelynn Boland, Hanne K. Collins, Nicole Abi-Esber, Alison Wood Brooks","doi":"10.1177/25152459231183919","DOIUrl":"https://doi.org/10.1177/25152459231183919","url":null,"abstract":"Conversation—a verbal interaction between two or more people—is a complex, pervasive, and consequential human behavior. Conversations have been studied across many academic disciplines. However, advances in recording and analysis techniques over the last decade have allowed researchers to more directly and precisely examine conversations in natural contexts and at a larger scale than ever before, and these advances open new paths to understand humanity and the social world. Existing reviews of text analysis and conversation research have focused on text generated by a single author (e.g., product reviews, news articles, and public speeches) and thus leave open questions about the unique challenges presented by interactive conversation data (i.e., dialogue). In this article, we suggest approaches to overcome common challenges in the workflow of conversation science, including recording and transcribing conversations, structuring data (to merge turn-level and speaker-level data sets), extracting and aggregating linguistic features, estimating effects, and sharing data. This practical guide is meant to shed light on current best practices and empower more researchers to study conversations more directly—to expand the community of conversation scholars and contribute to a greater cumulative scientific understanding of the social world.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136247077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Analysis of Cross-Sectional Networks: A Tutorial in R and JASP
Karoline B. S. Huth, Jill de Ron, Anneke E. Goudriaan, Judy Luigjes, Reza Mohammadi, Ruth J. van Holst, Eric-Jan Wagenmakers, Maarten Marsman
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231193334
Network psychometrics is a new direction in psychological research that conceptualizes psychological constructs as systems of interacting variables. In network analysis, variables are represented as nodes, and their interactions yield (partial) associations. Current estimation methods mostly use a frequentist approach, which does not allow for proper uncertainty quantification of the model and its parameters. Here, we outline a Bayesian approach to network analysis that offers three main benefits. In particular, applied researchers can use Bayesian methods to (1) determine structure uncertainty, (2) obtain evidence for edge inclusion and exclusion (i.e., distinguish conditional dependence or independence between variables), and (3) quantify parameter precision. In this article, we provide a conceptual introduction to Bayesian inference, describe how researchers can facilitate the three benefits for networks, and review the available R packages. In addition, we present two user-friendly software solutions: a new R package, easybgm, for fitting, extracting, and visualizing the Bayesian analysis of networks and a graphical user interface implementation in JASP. The methodology is illustrated with a worked-out example of a network of personality traits and mental health.
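As a flavor of the workflow, the following is a minimal sketch of fitting a Bayesian network with the easybgm package named in the abstract. The call signature and plotting helper are taken from the package documentation as we understand it and may differ across versions; treat the specifics as assumptions.

```r
library(easybgm)

# Toy data: five continuous variables standing in for questionnaire items.
set.seed(1)
d <- as.data.frame(matrix(rnorm(200 * 5), ncol = 5,
                          dimnames = list(NULL, paste0("V", 1:5))))

# Fit a Bayesian graphical model; "continuous" is assumed to be one of the
# supported data types (argument names assumed from the package docs).
fit <- easybgm(d, type = "continuous")

# Posterior inclusion probabilities quantify evidence for edge inclusion or
# exclusion (benefit 2 in the abstract); summary() reports them per edge.
summary(fit)

# Assumed plotting helper from the package: draws the estimated network.
plot_network(fit)
```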
{"title":"Bayesian Analysis of Cross-Sectional Networks: A Tutorial in R and JASP","authors":"Karoline B. S. Huth, Jill de Ron, Anneke E. Goudriaan, Judy Luigjes, Reza Mohammadi, Ruth J. van Holst, Eric-Jan Wagenmakers, Maarten Marsman","doi":"10.1177/25152459231193334","DOIUrl":"https://doi.org/10.1177/25152459231193334","url":null,"abstract":"Network psychometrics is a new direction in psychological research that conceptualizes psychological constructs as systems of interacting variables. In network analysis, variables are represented as nodes, and their interactions yield (partial) associations. Current estimation methods mostly use a frequentist approach, which does not allow for proper uncertainty quantification of the model and its parameters. Here, we outline a Bayesian approach to network analysis that offers three main benefits. In particular, applied researchers can use Bayesian methods to (1) determine structure uncertainty, (2) obtain evidence for edge inclusion and exclusion (i.e., distinguish conditional dependence or independence between variables), and (3) quantify parameter precision. In this article, we provide a conceptual introduction to Bayesian inference, describe how researchers can facilitate the three benefits for networks, and review the available R packages. In addition, we present two user-friendly software solutions: a new R package, easybgm, for fitting, extracting, and visualizing the Bayesian analysis of networks and a graphical user interface implementation in JASP. The methodology is illustrated with a worked-out example of a network of personality traits and mental health.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136093306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impossible Hypotheses and Effect-Size Limits
Wijnand A. P. van Tilburg, Lennert J. A. van Tilburg
Pub Date: 2023-10-01 | DOI: 10.1177/25152459231197605
Psychological science is moving toward further specification of effect sizes when formulating hypotheses, performing power analyses, and considering the relevance of findings. This development has sparked an appreciation for the wider context in which such effect sizes are found because the importance assigned to specific sizes may vary from situation to situation. We add to this development a crucial contingency that has so far been underappreciated in psychology: There are mathematical limits to the magnitudes that population effect sizes can take within the common multivariate context in which psychology is situated, and these limits can be far more restrictive than typically assumed. The implication is that some hypothesized or preregistered effect sizes may be impossible. At the same time, these restrictions offer a way of statistically triangulating the plausible range of unknown effect sizes. We explain the reason for the existence of these limits, illustrate how to identify them, and offer recommendations and tools for improving hypothesized effect sizes by exploiting the broader multivariate context in which they occur.
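The constraint at issue is that a correlation matrix must be positive semidefinite. Already with three variables, this pins down the admissible range of one correlation given the other two. The sketch below computes that range; it is a standard linear-algebra consequence, not code from the article.

```r
# Admissible range of cor(X, Y) given cor(X, Z) and cor(Y, Z), from the
# requirement that the 3x3 correlation matrix be positive semidefinite.
cor_limits <- function(r_xz, r_yz) {
  half_width <- sqrt((1 - r_xz^2) * (1 - r_yz^2))
  c(lower = r_xz * r_yz - half_width, upper = r_xz * r_yz + half_width)
}

# Example: if X and Y each correlate .8 with Z, cor(X, Y) cannot fall below .28,
# so a preregistered hypothesis of, say, cor(X, Y) = .10 would be impossible.
cor_limits(0.8, 0.8)
#> lower upper
#>  0.28  1.00

# Sanity check: at the boundary, the correlation matrix has a zero eigenvalue.
R <- matrix(c(1,    0.28, 0.8,
              0.28, 1,    0.8,
              0.8,  0.8,  1), nrow = 3)
min(eigen(R)$values)  # ~0
```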
{"title":"Impossible Hypotheses and Effect-Size Limits","authors":"Wijnand A. P. van Tilburg, Lennert J A van Tilburg","doi":"10.1177/25152459231197605","DOIUrl":"https://doi.org/10.1177/25152459231197605","url":null,"abstract":"Psychological science is moving toward further specification of effect sizes when formulating hypotheses, performing power analyses, and considering the relevance of findings. This development has sparked an appreciation for the wider context in which such effect sizes are found because the importance assigned to specific sizes may vary from situation to situation. We add to this development a crucial but in psychology hitherto underappreciated contingency: There are mathematical limits to the magnitudes that population effect sizes can take within the common multivariate context in which psychology is situated, and these limits can be far more restrictive than typically assumed. The implication is that some hypothesized or preregistered effect sizes may be impossible. At the same time, these restrictions offer a way of statistically triangulating the plausible range of unknown effect sizes. We explain the reason for the existence of these limits, illustrate how to identify them, and offer recommendations and tools for improving hypothesized effect sizes by exploiting the broader multivariate context in which they occur.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"1 1","pages":""},"PeriodicalIF":13.6,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139326360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Statistical Analysis in Team Science: The Case of a Bayesian Multiverse of Many Labs 4
S. Hoogeveen, S. Berkhout, Q. Gronau, E. Wagenmakers, J. Haaf
Pub Date: 2023-07-01 | DOI: 10.1177/25152459231182318
Team-science projects have become the “gold standard” for assessing the replicability and variability of key findings in psychological science. However, we believe the typical meta-analytic approach in these projects fails to match the wealth of collected data. Instead, we advocate the use of Bayesian hierarchical modeling for team-science projects, potentially extended with a multiverse analysis. We illustrate this full-scale analysis by applying it to the recently published Many Labs 4 project. This project aimed to replicate the mortality-salience effect: the finding that reminders of one’s own death strengthen identification with one’s own culture. In a multiverse analysis, we assess the robustness of the results with varying data-inclusion criteria and prior settings. Bayesian model comparison results largely converge to a common conclusion: The data provide evidence against a mortality-salience effect across the majority of our analyses. We issue general recommendations to facilitate full-scale analyses in team-science projects.
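For readers who want the flavor of such a hierarchical analysis, the following generic R sketch (using brms on simulated data, not the authors' analysis code) fits a model in which the condition effect varies across labs and then contrasts it against a null model via bridge sampling.

```r
library(brms)

# Simulate toy multi-lab data; column names and the true (null) effect are
# illustrative, not the Many Labs 4 data.
set.seed(42)
n_labs <- 20; n_per_lab <- 50
d <- data.frame(
  lab       = rep(seq_len(n_labs), each = n_per_lab),
  condition = rep(c(0, 1), length.out = n_labs * n_per_lab)
)
d$y <- rnorm(nrow(d), mean = 0 * d$condition, sd = 1)  # no true effect

# Hierarchical model: the condition effect varies across labs around a
# population-level mean, so all trial-level data inform the estimate.
fit1 <- brm(y ~ condition + (condition | lab), data = d,
            prior = prior(normal(0, 0.5), class = "b"),
            save_pars = save_pars(all = TRUE))

# Null model without the population-level condition effect.
fit0 <- brm(y ~ 1 + (condition | lab), data = d,
            save_pars = save_pars(all = TRUE))

# Bridge-sampling Bayes factor; values > 1 favor the null model here.
bayes_factor(fit0, fit1)
```

A multiverse extension would repeat this comparison across the grid of data-inclusion criteria and prior settings rather than for a single specification.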
{"title":"Improving Statistical Analysis in Team Science: The Case of a Bayesian Multiverse of Many Labs 4","authors":"S. Hoogeveen, S. Berkhout, Q. Gronau, E. Wagenmakers, J. Haaf","doi":"10.1177/25152459231182318","DOIUrl":"https://doi.org/10.1177/25152459231182318","url":null,"abstract":"Team-science projects have become the “gold standard” for assessing the replicability and variability of key findings in psychological science. However, we believe the typical meta-analytic approach in these projects fails to match the wealth of collected data. Instead, we advocate the use of Bayesian hierarchical modeling for team-science projects, potentially extended in a multiverse analysis. We illustrate this full-scale analysis by applying it to the recently published Many Labs 4 project. This project aimed to replicate the mortality-salience effect—that being reminded of one’s own death strengthens the own cultural identity. In a multiverse analysis, we assess the robustness of the results with varying data-inclusion criteria and prior settings. Bayesian model comparison results largely converge to a common conclusion: The data provide evidence against a mortality-salience effect across the majority of our analyses. We issue general recommendations to facilitate full-scale analyses in team-science projects.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43159473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Data Visualizations to Enhance Insight and Communication Across the Life Cycle of a Scientific Project
Kristina Wiebels, David Moreau
Pub Date: 2023-07-01 | DOI: 10.1177/25152459231160103
In scientific communication, figures are typically rendered as static displays. This often prevents active exploration of the underlying data, for example, to gauge the influence of particular data points or of particular analytic choices. Yet modern data-visualization tools, from animated plots to interactive notebooks and reactive web applications, allow psychologists to share and present their findings in dynamic and transparent ways. In this tutorial, we present a number of recent developments to build interactivity and animations into scientific communication and publications using examples and illustrations in the R language (basic knowledge of R is assumed). In particular, we discuss when and how to build dynamic figures, with step-by-step reproducible code that can easily be extended to the reader’s own projects. We illustrate how interactivity and animations can facilitate insight and communication across a project life cycle—from initial exchanges and discussions in a team to peer review and final publication—and provide a number of recommendations to use dynamic visualizations effectively. We close with a reflection on how the scientific-publishing model is currently evolving and consider the challenges and opportunities this shift might bring for data visualization.
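As one concrete example of the kind of interactivity the tutorial covers, a static ggplot can be made explorable with a single plotly call. This is a generic illustration on a built-in data set, not code from the article.

```r
library(ggplot2)
library(plotly)

# A static scatterplot of a built-in data set...
p <- ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point(size = 2) +
  labs(x = "Weight (1,000 lbs)", y = "Miles per gallon", color = "Cylinders")

# ...becomes explorable with one call: hovering reveals the underlying values
# and clicking legend entries toggles groups, the kind of active exploration
# of the data that static figures prevent.
ggplotly(p)
```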
{"title":"Dynamic Data Visualizations to Enhance Insight and Communication Across the Life Cycle of a Scientific Project","authors":"Kristina Wiebels, David Moreau","doi":"10.1177/25152459231160103","DOIUrl":"https://doi.org/10.1177/25152459231160103","url":null,"abstract":"In scientific communication, figures are typically rendered as static displays. This often prevents active exploration of the underlying data, for example, to gauge the influence of particular data points or of particular analytic choices. Yet modern data-visualization tools, from animated plots to interactive notebooks and reactive web applications, allow psychologists to share and present their findings in dynamic and transparent ways. In this tutorial, we present a number of recent developments to build interactivity and animations into scientific communication and publications using examples and illustrations in the R language (basic knowledge of R is assumed). In particular, we discuss when and how to build dynamic figures, with step-by-step reproducible code that can easily be extended to the reader’s own projects. We illustrate how interactivity and animations can facilitate insight and communication across a project life cycle—from initial exchanges and discussions in a team to peer review and final publication—and provide a number of recommendations to use dynamic visualizations effectively. We close with a reflection on how the scientific-publishing model is currently evolving and consider the challenges and opportunities this shift might bring for data visualization.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135454671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selective Hypothesis Reporting in Psychology: Comparing Preregistrations and Corresponding Publications
Olmo R. van den Akker, Marcel A. L. M. van Assen, Manon Enting, Myrthe de Jonge, How Hwee Ong, Franziska Rüffer, Martijn Schoenmakers, Andrea H. Stoevenbelt, Jelte M. Wicherts, Marjan Bakker
Pub Date: 2023-07-01 | DOI: 10.1177/25152459231187988
In this study, we assessed the extent of selective hypothesis reporting in psychological research by comparing the hypotheses found in a set of 459 preregistrations with the hypotheses found in the corresponding articles. We found that more than half of the preregistered studies we assessed contained omitted hypotheses (N = 224; 52%) or added hypotheses (N = 227; 57%), and about one-fifth of studies contained hypotheses with a direction change (N = 79; 18%). We found only a small number of studies with hypotheses that were demoted from primary to secondary importance (N = 2; 1%) and no studies with hypotheses that were promoted from secondary to primary importance. In all, 60% of studies included at least one hypothesis in one or more of these categories, indicating a substantial bias in presenting and selecting hypotheses by researchers and/or reviewers/editors. Contrary to our expectations, we did not find sufficient evidence that added and changed hypotheses were more likely to be statistically significant than nonselectively reported hypotheses. For the other types of selective hypothesis reporting, we likely did not have sufficient statistical power to test for a relationship with statistical significance. Finally, we found that replication studies were less likely to include selectively reported hypotheses than original studies. Overall, selective hypothesis reporting is problematically common in psychological research. We urge researchers, reviewers, and editors to ensure that hypotheses outlined in preregistrations are clearly formulated and accurately presented in the corresponding articles.
{"title":"Selective Hypothesis Reporting in Psychology: Comparing Preregistrations and Corresponding Publications","authors":"Olmo R. van den Akker, Marcel A. L. M. van Assen, Manon Enting, Myrthe de Jonge, How Hwee Ong, Franziska Rüffer, Martijn Schoenmakers, Andrea H. Stoevenbelt, Jelte M. Wicherts, Marjan Bakker","doi":"10.1177/25152459231187988","DOIUrl":"https://doi.org/10.1177/25152459231187988","url":null,"abstract":"In this study, we assessed the extent of selective hypothesis reporting in psychological research by comparing the hypotheses found in a set of 459 preregistrations with the hypotheses found in the corresponding articles. We found that more than half of the preregistered studies we assessed contained omitted hypotheses ( N = 224; 52%) or added hypotheses ( N = 227; 57%), and about one-fifth of studies contained hypotheses with a direction change ( N = 79; 18%). We found only a small number of studies with hypotheses that were demoted from primary to secondary importance ( N = 2; 1%) and no studies with hypotheses that were promoted from secondary to primary importance. In all, 60% of studies included at least one hypothesis in one or more of these categories, indicating a substantial bias in presenting and selecting hypotheses by researchers and/or reviewers/editors. Contrary to our expectations, we did not find sufficient evidence that added hypotheses and changed hypotheses were more likely to be statistically significant than nonselectively reported hypotheses. For the other types of selective hypothesis reporting, we likely did not have sufficient statistical power to test for a relationship with statistical significance. Finally, we found that replication studies were less likely to include selectively reported hypotheses than original studies. In all, selective hypothesis reporting is problematically common in psychological research. We urge researchers, reviewers, and editors to ensure that hypotheses outlined in preregistrations are clearly formulated and accurately presented in the corresponding articles.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135806862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}