A Guide for Calculating Study-Level Statistical Power for Meta-Analyses
Daniel S. Quintana
Pub Date: 2023-01-01 | DOI: 10.1177/25152459221147260
Meta-analysis is a popular approach in the psychological sciences for synthesizing data across studies. However, the credibility of meta-analysis outcomes depends on the evidential value of studies included in the body of evidence used for data synthesis. One important consideration for determining a study’s evidential value is the statistical power of the study’s design/statistical test combination for detecting hypothetical effect sizes of interest. Studies with a design/test combination that cannot reliably detect a wide range of effect sizes are more susceptible to questionable research practices and exaggerated effect sizes. Therefore, determining the statistical power for design/test combinations for studies included in meta-analyses can help researchers make decisions regarding confidence in the body of evidence. Because the one true population effect size is unknown when hypothesis testing, an alternative approach is to determine statistical power for a range of hypothetical effect sizes. This tutorial introduces the metameta R package and web app, which facilitates the straightforward calculation and visualization of study-level statistical power in meta-analyses for a range of hypothetical effect sizes. Readers will be shown how to reanalyze data using information typically presented in meta-analysis forest plots or tables and how to integrate the metameta package when reporting novel meta-analyses. A step-by-step companion screencast video tutorial is also provided to assist readers using the R package.
Information Provision for Informed Consent Procedures in Psychological Research Under the General Data Protection Regulation: A Practical Guide
D. Hallinan, Franziska Boehm, Annika Külpmann, M. Elson
Pub Date: 2023-01-01 | DOI: 10.1177/25152459231151944
Psychological research often involves the collection and processing of personal data from human research participants. The European General Data Protection Regulation (GDPR) applies, as a rule, to psychological research conducted on personal data in the European Economic Area (EEA)—and even, in certain cases, to psychological research conducted on personal data outside the EEA. The GDPR elaborates requirements concerning the forms of information that should be communicated to research participants whenever personal data are collected directly from them. There is a general norm that informed consent should be obtained before psychological research involving the collection of personal data directly from research participants is conducted. The information required to be provided under the GDPR is normally communicated in the context of an informed consent procedure. There is reason to believe, however, that the information required by the GDPR may not always be provided. Our aim in this tutorial is thus to provide general practical guidance to psychological researchers allowing them to understand the forms of information that must be provided to research participants under the GDPR in informed consent procedures.
Low Research-Data Availability in Educational-Psychology Journals: No Indication of Effective Research-Data Policies
Mark Huff, Elke C. Bongartz
Pub Date: 2023-01-01 | DOI: 10.1177/25152459231156419
Research-data availability contributes to the transparency of the research process and the credibility of educational-psychology research and science in general. Recently, there have been many initiatives to increase the availability and quality of research data, and many research institutions have adopted research-data policies. This increased awareness might have increased the sharing of research data reported in empirical articles. To test this idea, we coded 1,242 publications from six educational-psychology journals and the psychological journal Cognition (as a baseline) published in 2018 and 2020. Research-data availability was low (3.85% compared with 62.74% in Cognition) but increased from 0.32% (2018) to 7.16% (2020). However, neither the data-transparency level of the journal nor the existence of an official research-data policy at the corresponding author’s institution was related to research-data availability. We discuss the consequences of these findings for institutional research-data-management processes.
Why the Cross-Lagged Panel Model Is Almost Never the Right Choice
Richard E. Lucas
Pub Date: 2023-01-01 | DOI: 10.1177/25152459231158378
The cross-lagged panel model (CLPM) is a widely used technique for examining reciprocal causal effects using longitudinal data. Critics of the CLPM have noted that by failing to account for certain person-level associations, estimates of these causal effects can be biased. Because of this, models that incorporate stable-trait components (e.g., the random-intercept CLPM) have become popular alternatives. Debates about the merits of the CLPM have continued, however, with some researchers arguing that the CLPM is more appropriate than modern alternatives for examining common psychological questions. In this article, I discuss the ways that these defenses of the CLPM fail to acknowledge well-known limitations of the model. I propose some possible sources of confusion regarding these models and provide alternative ways of thinking about the problems with the CLPM. I then show in simulated data that with realistic assumptions, the CLPM is very likely to find spurious cross-lagged effects when they do not exist and can sometimes underestimate these effects when they do exist.
Evaluating Implementation of the Transparency and Openness Promotion Guidelines: Reliability of Instruments to Assess Journal Policies, Procedures, and Practices
S. Kianersi, S. Grant, Kevin Naaman, B. Henschel, D. Mellor, S. Apte, J. Deyoe, P. Eze, Cuiqiong Huo, Bethany L. Lavender, Nicha Taschanchai, Xinlu Zhang, E. Mayo-Wilson
Pub Date: 2023-01-01 | DOI: 10.1177/25152459221149735
The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor quantifies the extent to which journals adopt TOP in their policies, but there is no validated instrument to assess TOP implementation. Moreover, raters might assess the same policies differently. Instruments with objective questions are needed to assess TOP implementation reliably. In this study, we examined the interrater reliability and agreement of three new instruments for assessing TOP implementation in journal policies (instructions to authors), procedures (manuscript-submission systems), and practices (journal articles). Independent raters used these instruments to assess 339 journals from the behavioral, social, and health sciences. We calculated interrater agreement (IRA) and interrater reliability (IRR) for each of 10 TOP standards and for each question in our instruments (13 policy questions, 26 procedure questions, 14 practice questions). IRA was high for each standard in TOP; however, IRA might have been high by chance because most standards were not implemented by most journals. No standard had “excellent” IRR. Three standards had “good,” one had “moderate,” and six had “poor” IRR. Likewise, IRA was high for most instrument questions, and IRR was moderate or worse for 62%, 54%, and 43% of policy, procedure, and practice questions, respectively. Although results might be explained by limitations in our process, instruments, and team, we are unaware of better methods for assessing TOP implementation. Clarifying distinctions among different levels of implementation for each TOP standard might improve its implementation and assessment (study protocol: https://doi.org/10.1186/s41073-021-00112-8).
Beyond Random Effects: When Small-Study Findings Are More Heterogeneous
T. Stanley, Hristos Doucouliagos, J. Ioannidis
Pub Date: 2022-10-01 | DOI: 10.1177/25152459221120427
New meta-regression methods are introduced that identify whether the magnitude of heterogeneity across study findings is correlated with their standard errors. Evidence from dozens of meta-analyses shows that this correlation is robust and that small-sample studies typically have higher heterogeneity. This correlated heterogeneity violates the random-effects (RE) model of additive and independent heterogeneity. When small studies not only have inadequate statistical power but also high heterogeneity, their scientific contribution is even more dubious. When the heterogeneity variance is correlated with the sampling-error variance to the degree we find, simulations show that RE is dominated by an alternative weighted average, the unrestricted weighted least squares (UWLS). Meta-research evidence combined with simulations establishes that UWLS should replace RE as the conventional meta-analysis summary of psychological research.
Journal N-Pact Factors From 2011 to 2019: Evaluating the Quality of Social/Personality Journals With Respect to Sample Size and Statistical Power
R. C. Fraley, Jia Y. Chong, Kyle A. Baacke, A. Greco, Hanxiong Guan, S. Vazire
Pub Date: 2022-10-01 | DOI: 10.1177/25152459221120217
Scholars and institutions commonly use impact factors to evaluate the quality of empirical research. However, a number of findings published in journals with high impact factors have failed to replicate, suggesting that impact alone may not be an accurate indicator of quality. Fraley and Vazire proposed an alternative index, the N-pact factor, which indexes the median sample size of published studies, providing a narrow but relevant indicator of research quality. In the present research, we expand on the original report by examining the N-pact factor of social/personality-psychology journals between 2011 and 2019, incorporating additional journals and accounting for study design (i.e., between persons, repeated measures, and mixed). There was substantial variation in the sample sizes used in studies published in different journals. Journals that emphasized personality processes and individual differences had larger N-pact factors than journals that emphasized social-psychological processes. Moreover, N-pact factors were largely independent of traditional markers of impact. Although the majority of journals in 2011 published studies that were not well powered to detect an effect of ρ = .20, this situation had improved considerably by 2019. In 2019, eight of the nine journals we sampled published studies that were, on average, powered at 80% or higher to detect such an effect. After decades of unheeded warnings from methodologists about the dangers of small-sample designs, the field of social/personality psychology has begun to use larger samples. We hope the N-pact factor will be supplemented by other indices that can be used as alternatives to improve further the evaluation of research.
Adjusting for Publication Bias in JASP and R: Selection Models, PET-PEESE, and Robust Bayesian Meta-Analysis
František Bartoš, Maximilian Maier, Daniel S. Quintana, E. Wagenmakers
Pub Date: 2022-07-01 | DOI: 10.1177/25152459221109259
Meta-analyses are essential for cumulative science, but their validity can be compromised by publication bias. To mitigate the impact of publication bias, one may apply publication-bias-adjustment techniques such as precision-effect test and precision-effect estimate with standard errors (PET-PEESE) and selection models. These methods, implemented in JASP and R, allow researchers without programming experience to conduct state-of-the-art publication-bias-adjusted meta-analysis. In this tutorial, we demonstrate how to conduct a publication-bias-adjusted meta-analysis in JASP and R and interpret the results. First, we explain two frequentist bias-correction methods: PET-PEESE and selection models. Second, we introduce robust Bayesian meta-analysis, a Bayesian approach that simultaneously considers both PET-PEESE and selection models. We illustrate the methodology on an example data set, provide an instructional video (https://bit.ly/pubbias) and an R-markdown script (https://osf.io/uhaew/), and discuss the interpretation of the results. Finally, we include concrete guidance on reporting the meta-analytic results in an academic article.
Effective Maps, Easily Done: Visualizing Geo-Psychological Differences Using Distance Weights
Tobias Ebert, Lars Mewes, F. Götz, Thomas Brenner
Pub Date: 2022-07-01 | DOI: 10.1177/25152459221101816
Psychologists of many subfields are becoming increasingly interested in the geographical distribution of psychological phenomena. An integral part of this new stream of geo-psychological studies is to visualize spatial distributions of psychological phenomena in maps. However, most psychologists are not trained in visualizing spatial data. As a result, almost all existing geo-psychological studies rely on the most basic mapping technique: color-coding disaggregated data (i.e., grouping individuals into predefined spatial units and then mapping out average scores across these spatial units). Although this basic mapping technique is not wrong, it often leaves unleveraged potential to effectively visualize spatial patterns. The aim of this tutorial is to introduce psychologists to an alternative, easy-to-use mapping technique: distance-based weighting (i.e., calculating area estimates that represent distance-weighted averages of all measurement locations). We outline the basic idea of distance-based weighting and explain how to implement this technique so that it is effective for geo-psychological research. Using large-scale mental-health data from the United States (N = 2,058,249), we empirically demonstrate how distance-based weighting may complement the commonly used basic mapping technique. We provide fully annotated R code and open access to all data used in our analyses.
Hybrid Experimental Designs for Intervention Development: What, Why, and How
Inbal Nahum-Shani, John J Dziak, Maureen A Walton, Walter Dempsey
Pub Date: 2022-07-01 | Epub Date: 2022-09-07 | DOI: 10.1177/25152459221114279
Advances in mobile and wireless technologies offer tremendous opportunities for extending the reach and impact of psychological interventions and for adapting interventions to the unique and changing needs of individuals. However, insufficient engagement remains a critical barrier to the effectiveness of digital interventions. Human delivery of interventions (e.g., by clinical staff) can be more engaging but potentially more expensive and burdensome. Hence, the integration of digital and human-delivered components is critical to building effective and scalable psychological interventions. Existing experimental designs can be used to answer questions either about human-delivered components that are typically sequenced and adapted at relatively slow timescales (e.g., monthly) or about digital components that are typically sequenced and adapted at much faster timescales (e.g., daily). However, these methodologies do not accommodate sequencing and adaptation of components at multiple timescales and hence cannot be used to empirically inform the joint sequencing and adaptation of human-delivered and digital components. Here, we introduce the hybrid experimental design (HED), a new experimental approach that can be used to answer scientific questions about building psychological interventions in which human-delivered and digital components are integrated and adapted at multiple timescales. We describe the key characteristics of HEDs (i.e., what they are), explain their scientific rationale (i.e., why they are needed), and provide guidelines for their design and corresponding data analysis (i.e., how data arising from HEDs can be used to inform effective and scalable psychological interventions).