
Latest publications in Advances in Methods and Practices in Psychological Science

A Guide for Calculating Study-Level Statistical Power for Meta-Analyses
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2023-01-01, DOI: 10.1177/25152459221147260
Daniel S. Quintana
Meta-analysis is a popular approach in the psychological sciences for synthesizing data across studies. However, the credibility of meta-analysis outcomes depends on the evidential value of the studies included in the body of evidence used for data synthesis. One important consideration for determining a study’s evidential value is the statistical power of the study’s design/statistical test combination for detecting hypothetical effect sizes of interest. Studies with a design/test combination that cannot reliably detect a wide range of effect sizes are more susceptible to questionable research practices and exaggerated effect sizes. Therefore, determining the statistical power of the design/test combinations of studies included in meta-analyses can help researchers make decisions regarding confidence in the body of evidence. Because the true population effect size is unknown when testing hypotheses, an alternative approach is to determine statistical power across a range of hypothetical effect sizes. This tutorial introduces the metameta R package and web app, which facilitate the straightforward calculation and visualization of study-level statistical power in meta-analyses for a range of hypothetical effect sizes. Readers will be shown how to reanalyze data using information typically presented in meta-analysis forest plots or tables and how to integrate the metameta package when reporting novel meta-analyses. A step-by-step companion screencast video tutorial is also provided to assist readers using the R package.
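The core computation behind such a power grid can be sketched with a normal approximation (a simplified stand-in for what metameta computes, not its implementation; the two-sample design, the sample sizes, and the effect-size grid below are illustrative assumptions):

```python
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def two_sample_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample t test to detect Cohen's d,
    using the normal approximation to the noncentral t distribution."""
    ncp = d * (n_per_group / 2.0) ** 0.5      # noncentrality parameter
    z_crit = _Z.inv_cdf(1.0 - alpha / 2.0)
    return (1.0 - _Z.cdf(z_crit - ncp)) + _Z.cdf(-z_crit - ncp)

# Study-level power across a grid of hypothetical effect sizes — the kind of
# table metameta builds from the information reported in forest plots.
for label, n in {"Study A": 20, "Study B": 50, "Study C": 200}.items():
    print(label, {d: round(two_sample_power(d, n), 2) for d in (0.2, 0.5, 0.8)})
```

A study whose row stays low across the whole grid cannot reliably detect any of the hypothesized effects, which is the evidential-value signal the tutorial focuses on.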
Citations: 8
Information Provision for Informed Consent Procedures in Psychological Research Under the General Data Protection Regulation: A Practical Guide
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2023-01-01, DOI: 10.1177/25152459231151944
D. Hallinan, Franziska Boehm, Annika Külpmann, M. Elson
Psychological research often involves the collection and processing of personal data from human research participants. The European General Data Protection Regulation (GDPR) applies, as a rule, to psychological research conducted on personal data in the European Economic Area (EEA)—and even, in certain cases, to psychological research conducted on personal data outside the EEA. The GDPR elaborates requirements concerning the forms of information that should be communicated to research participants whenever personal data are collected directly from them. There is a general norm that informed consent should be obtained before psychological research involving the collection of personal data directly from research participants is conducted. The information required to be provided under the GDPR is normally communicated in the context of an informed consent procedure. There is reason to believe, however, that the information required by the GDPR may not always be provided. Our aim in this tutorial is thus to provide general practical guidance to psychological researchers allowing them to understand the forms of information that must be provided to research participants under the GDPR in informed consent procedures.
Citations: 0
Low Research-Data Availability in Educational-Psychology Journals: No Indication of Effective Research-Data Policies
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2023-01-01, DOI: 10.1177/25152459231156419
Mark Huff, Elke C. Bongartz
Research-data availability contributes to the transparency of the research process and the credibility of educational-psychology research and science in general. Recently, there have been many initiatives to increase the availability and quality of research data. Many research institutions have adopted research-data policies. This increased awareness might have raised the sharing of research data in empirical articles. To test this idea, we coded 1,242 publications from six educational-psychology journals and the psychological journal Cognition (as a baseline) published in 2018 and 2020. Research-data availability was low (3.85% compared with 62.74% in Cognition) but has increased from 0.32% (2018) to 7.16% (2020). However, neither the data-transparency level of the journal nor the existence of an official research-data policy on the level of the corresponding author’s institution was related to research-data availability. We discuss the consequences of these findings for institutional research-data-management processes.
Citations: 0
Why the Cross-Lagged Panel Model Is Almost Never the Right Choice
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2023-01-01, DOI: 10.1177/25152459231158378
Richard E. Lucas
The cross-lagged panel model (CLPM) is a widely used technique for examining reciprocal causal effects using longitudinal data. Critics of the CLPM have noted that by failing to account for certain person-level associations, estimates of these causal effects can be biased. Because of this, models that incorporate stable-trait components (e.g., the random-intercept CLPM) have become popular alternatives. Debates about the merits of the CLPM have continued, however, with some researchers arguing that the CLPM is more appropriate than modern alternatives for examining common psychological questions. In this article, I discuss the ways that these defenses of the CLPM fail to acknowledge well-known limitations of the model. I propose some possible sources of confusion regarding these models and provide alternative ways of thinking about the problems with the CLPM. I then show in simulated data that with realistic assumptions, the CLPM is very likely to find spurious cross-lagged effects when they do not exist and can sometimes underestimate these effects when they do exist.
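The spurious-effect mechanism can be demonstrated in a few lines (a minimal simulation in the spirit of the article, not the author's code; the trait correlation of .7 and the no-autoregression setup are assumptions chosen for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Two constructs with NO true cross-lagged effect: each wave is just a
# stable trait plus occasion-specific noise; the two traits correlate r = .7.
trait = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=n)
tx, ty = trait[:, 0], trait[:, 1]
x1, y1 = tx + rng.normal(size=n), ty + rng.normal(size=n)
x2, y2 = tx + rng.normal(size=n), ty + rng.normal(size=n)

# CLPM-style regression: predict y at wave 2 from y and x at wave 1.
# Because y1 measures the stable trait with error, x1 picks up leftover
# trait variance and yields a clearly nonzero "cross-lagged" coefficient.
X = np.column_stack([np.ones(n), y1, x1])
beta, *_ = np.linalg.lstsq(X, y2, rcond=None)
print(f"spurious cross-lagged effect of x1 on y2: {beta[2]:.3f}")  # true value is 0
```

A stable-trait-aware model (such as the random-intercept CLPM the abstract mentions) would absorb this shared trait variance instead of attributing it to a lagged causal path.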
Citations: 20
Evaluating Implementation of the Transparency and Openness Promotion Guidelines: Reliability of Instruments to Assess Journal Policies, Procedures, and Practices
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2023-01-01, DOI: 10.1177/25152459221149735
S. Kianersi, S. Grant, Kevin Naaman, B. Henschel, D. Mellor, S. Apte, J. Deyoe, P. Eze, Cuiqiong Huo, Bethany L. Lavender, Nicha Taschanchai, Xinlu Zhang, E. Mayo-Wilson
The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor quantifies the extent to which journals adopt TOP in their policies, but there is no validated instrument to assess TOP implementation. Moreover, raters might assess the same policies differently. Instruments with objective questions are needed to assess TOP implementation reliably. In this study, we examined the interrater reliability and agreement of three new instruments for assessing TOP implementation in journal policies (instructions to authors), procedures (manuscript-submission systems), and practices (journal articles). Independent raters used these instruments to assess 339 journals from the behavioral, social, and health sciences. We calculated interrater agreement (IRA) and interrater reliability (IRR) for each of 10 TOP standards and for each question in our instruments (13 policy questions, 26 procedure questions, 14 practice questions). IRA was high for each standard in TOP; however, IRA might have been high by chance because most standards were not implemented by most journals. No standard had “excellent” IRR. Three standards had “good,” one had “moderate,” and six had “poor” IRR. Likewise, IRA was high for most instrument questions, and IRR was moderate or worse for 62%, 54%, and 43% of policy, procedure, and practice questions, respectively. Although results might be explained by limitations in our process, instruments, and team, we are unaware of better methods for assessing TOP implementation. Clarifying distinctions among different levels of implementation for each TOP standard might improve its implementation and assessment (study protocol: https://doi.org/10.1186/s41073-021-00112-8).
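The IRA-versus-IRR distinction the authors describe — raw agreement that is high largely by chance when a standard is rarely implemented — can be illustrated with Cohen's kappa (a toy example; the paper's own reliability statistics may differ):

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Raw interrater agreement: proportion of items coded identically."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement for two raters over categorical codes."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # expected by chance
    return (po - pe) / (1 - pe)

# Skewed base rate: both raters code "not implemented" for almost every
# journal, so raw agreement is high even though their rare positive codes
# never coincide -- chance-corrected reliability is actually below zero.
rater1 = ["no"] * 18 + ["yes", "no"]
rater2 = ["no"] * 18 + ["no", "yes"]
print(percent_agreement(rater1, rater2), round(cohens_kappa(rater1, rater2), 3))
```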
Citations: 3
Beyond Random Effects: When Small-Study Findings Are More Heterogeneous
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2022-10-01, DOI: 10.1177/25152459221120427
T. Stanley, Hristos Doucouliagos, J. Ioannidis
New meta-regression methods are introduced that identify whether the magnitude of heterogeneity across study findings is correlated with their standard errors. Evidence from dozens of meta-analyses finds robust evidence of this correlation and that small-sample studies typically have higher heterogeneity. This correlated heterogeneity violates the random-effects (RE) model of additive and independent heterogeneity. When small studies not only have inadequate statistical power but also high heterogeneity, their scientific contribution is even more dubious. When the heterogeneity variance is correlated with the sampling-error variance to the degree we find, simulations show that RE is dominated by an alternative weighted average, the unrestricted weighted least squares (UWLS). Meta-research evidence combined with simulations establishes that UWLS should replace RE as the conventional meta-analysis summary of psychological research.
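A minimal sketch of the UWLS summary the authors advocate — the inverse-variance weighted mean with a standard error rescaled by the weighted residual dispersion (an illustrative reconstruction, not the authors' implementation):

```python
import numpy as np

def uwls(effects, ses):
    """Unrestricted weighted least squares summary effect: the inverse-variance
    weighted mean, with its standard error scaled by the square root of the
    weighted mean squared residual (so excess dispersion inflates the SE)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    est = np.sum(w * effects) / np.sum(w)
    k = len(effects)
    mse = np.sum(w * (effects - est) ** 2) / (k - 1)  # weighted residual dispersion
    se = np.sqrt(mse / np.sum(w))
    return est, se

# Equal precision reduces UWLS to the plain mean; unequal dispersion widens the SE.
est, se = uwls([0.1, 0.2, 0.3], [0.1, 0.1, 0.1])
print(round(est, 3), round(se, 4))
```

Unlike RE, this estimator does not add an independent between-study variance component, which is why it behaves better when heterogeneity is correlated with the sampling-error variance, as the abstract argues.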
Citations: 2
Journal N-Pact Factors From 2011 to 2019: Evaluating the Quality of Social/Personality Journals With Respect to Sample Size and Statistical Power
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2022-10-01, DOI: 10.1177/25152459221120217
R. C. Fraley, Jia Y. Chong, Kyle A. Baacke, A. Greco, Hanxiong Guan, S. Vazire
Scholars and institutions commonly use impact factors to evaluate the quality of empirical research. However, a number of findings published in journals with high impact factors have failed to replicate, suggesting that impact alone may not be an accurate indicator of quality. Fraley and Vazire proposed an alternative index, the N-pact factor, which indexes the median sample size of published studies, providing a narrow but relevant indicator of research quality. In the present research, we expand on the original report by examining the N-pact factor of social/personality-psychology journals between 2011 and 2019, incorporating additional journals and accounting for study design (i.e., between persons, repeated measures, and mixed). There was substantial variation in the sample sizes used in studies published in different journals. Journals that emphasized personality processes and individual differences had larger N-pact factors than journals that emphasized social-psychological processes. Moreover, N-pact factors were largely independent of traditional markers of impact. Although the majority of journals in 2011 published studies that were not well powered to detect an effect of ρ = .20, this situation had improved considerably by 2019. In 2019, eight of the nine journals we sampled published studies that were, on average, powered at 80% or higher to detect such an effect. After decades of unheeded warnings from methodologists about the dangers of small-sample designs, the field of social/personality psychology has begun to use larger samples. We hope the N-pact factor will be supplemented by other indices that can be used as alternatives to improve further the evaluation of research.
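The two quantities at the heart of this analysis — the N-pact factor and power to detect ρ = .20 — are straightforward to compute (the journal's sample sizes below are hypothetical, and the Fisher z approximation is a standard simplification, not necessarily the authors' exact procedure):

```python
import math
from statistics import NormalDist, median

_Z = NormalDist()

def npact_factor(sample_sizes):
    """N-pact factor: the median sample size of a journal's published studies."""
    return median(sample_sizes)

def correlation_power(rho, n, alpha=0.05):
    """Approximate two-sided power to detect a correlation rho with n pairs,
    via the Fisher z transformation."""
    fz = math.atanh(rho) * math.sqrt(n - 3)
    z_crit = _Z.inv_cdf(1 - alpha / 2)
    return (1 - _Z.cdf(z_crit - fz)) + _Z.cdf(-z_crit - fz)

ns = [45, 60, 80, 120, 250, 400, 900]   # hypothetical sample sizes for one journal
nf = npact_factor(ns)
print(nf, round(correlation_power(0.20, nf), 2))
```

Linking the two functions this way shows why the N-pact factor is a power-relevant quality indicator: the median study of this hypothetical journal is underpowered for ρ = .20 even though a few of its studies are very large.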
Citations: 5
Adjusting for Publication Bias in JASP and R: Selection Models, PET-PEESE, and Robust Bayesian Meta-Analysis
IF 13.6, CAS Tier 1 (Psychology), Q1 PSYCHOLOGY, Pub Date: 2022-07-01, DOI: 10.1177/25152459221109259
František Bartoš, Maximilian Maier, Daniel S. Quintana, E. Wagenmakers
Meta-analyses are essential for cumulative science, but their validity can be compromised by publication bias. To mitigate the impact of publication bias, one may apply publication-bias-adjustment techniques such as precision-effect test and precision-effect estimate with standard errors (PET-PEESE) and selection models. These methods, implemented in JASP and R, allow researchers without programming experience to conduct state-of-the-art publication-bias-adjusted meta-analysis. In this tutorial, we demonstrate how to conduct a publication-bias-adjusted meta-analysis in JASP and R and interpret the results. First, we explain two frequentist bias-correction methods: PET-PEESE and selection models. Second, we introduce robust Bayesian meta-analysis, a Bayesian approach that simultaneously considers both PET-PEESE and selection models. We illustrate the methodology on an example data set, provide an instructional video (https://bit.ly/pubbias) and an R-markdown script (https://osf.io/uhaew/), and discuss the interpretation of the results. Finally, we include concrete guidance on reporting the meta-analytic results in an academic article.
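The conditional PET-PEESE logic can be sketched as follows (an illustrative reconstruction under common conventions — a one-sided PET test at z ≈ 1.645 deciding the switch to PEESE — not the JASP or R implementation):

```python
import numpy as np

def _wls(y, X, w):
    """Weighted least squares; returns coefficients and their standard errors."""
    Xw = X * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    resid = yw - Xw @ beta
    mse = resid @ resid / (len(y) - X.shape[1])
    cov = mse * np.linalg.inv(Xw.T @ Xw)
    return beta, np.sqrt(np.diag(cov))

def pet_peese(effects, ses, z_crit=1.645):
    """Conditional PET-PEESE estimate of the bias-corrected mean effect:
    report the PET intercept (effect ~ SE) unless it is significantly
    positive, in which case switch to the PEESE intercept (effect ~ SE^2)."""
    y, se = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / se**2
    b_pet, se_pet = _wls(y, np.column_stack([np.ones_like(se), se]), w)
    if b_pet[0] / se_pet[0] > z_crit:        # PET detects a genuine nonzero effect
        b_peese, _ = _wls(y, np.column_stack([np.ones_like(se), se**2]), w)
        return b_peese[0]
    return b_pet[0]

# Toy data: observed effects grow with the standard error, the classic
# small-study signature that these corrections are designed to remove.
est = pet_peese([0.46, 0.50, 0.55, 0.48, 0.51], [0.05, 0.10, 0.15, 0.08, 0.12])
print(round(est, 3))
```

The intercept of either regression estimates the effect an infinitely precise study (SE = 0) would report, which is why it serves as the publication-bias-adjusted summary.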
Citations: 26
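The PET-PEESE logic summarized in the abstract above — fit a meta-regression of effect sizes on standard errors (PET), then fall back to a regression on sampling variances (PEESE) when the PET intercept is significant — can be sketched in a few lines. The tutorial itself works in JASP and R; the Python sketch below is an illustrative re-implementation under common simplifying assumptions (inverse-variance weights, a normal approximation in place of the usual t-test, a one-sided α = .10 switching rule), not the authors' code.

```python
import math

def _wls_intercept(x, y, w):
    """Weighted least-squares fit of y = a + b*x with known weights w;
    returns the intercept a and its standard error (meta-regression scaling)."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    a = yb - b * xb
    se_a = math.sqrt(1.0 / sw + xb ** 2 / sxx)  # Var(a) = 1/Σw + x̄w²/Sxx
    return a, se_a

def pet_peese(effects, ses, alpha=0.10):
    """PET-PEESE sketch: PET regresses effects on standard errors; if the
    PET intercept is significantly positive, report the PEESE intercept
    (regression on sampling variances) as the bias-adjusted estimate."""
    w = [1.0 / se ** 2 for se in ses]            # inverse-variance weights
    pet_est, pet_se = _wls_intercept(ses, effects, w)
    z = pet_est / pet_se
    # One-sided p-value via the normal approximation (a t-test is standard)
    p = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    if p < alpha:
        peese_est, _ = _wls_intercept([se ** 2 for se in ses], effects, w)
        return "PEESE", peese_est
    return "PET", pet_est
```

With a constant true effect and no small-study relation, PET detects a nonzero effect and the function hands back the PEESE intercept, which recovers that effect.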
Effective Maps, Easily Done: Visualizing Geo-Psychological Differences Using Distance Weights
IF 13.6 Tier 1 Psychology Q1 PSYCHOLOGY Pub Date: 2022-07-01 DOI: 10.1177/25152459221101816
Tobias Ebert, Lars Mewes, F. Götz, Thomas Brenner
Psychologists of many subfields are becoming increasingly interested in the geographical distribution of psychological phenomena. An integral part of this new stream of geo-psychological studies is to visualize spatial distributions of psychological phenomena in maps. However, most psychologists are not trained in visualizing spatial data. As a result, almost all existing geo-psychological studies rely on the most basic mapping technique: color-coding disaggregated data (i.e., grouping individuals into predefined spatial units and then mapping out average scores across these spatial units). Although this basic mapping technique is not wrong, it often leaves unleveraged potential to effectively visualize spatial patterns. The aim of this tutorial is to introduce psychologists to an alternative, easy-to-use mapping technique: distance-based weighting (i.e., calculating area estimates that represent distance-weighted averages of all measurement locations). We outline the basic idea of distance-based weighting and explain how to implement this technique so that it is effective for geo-psychological research. Using large-scale mental-health data from the United States (N = 2,058,249), we empirically demonstrate how distance-based weighting may complement the commonly used basic mapping technique. We provide fully annotated R code and open access to all data used in our analyses.
Citations: 0
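The distance-based weighting idea in the abstract above — every map location's estimate is a distance-weighted average over all measurement locations, rather than a within-unit mean — can be sketched as follows. This is an illustrative Python sketch, not the authors' published R code; the Gaussian kernel and the `bandwidth` parameter are assumptions chosen for the example.

```python
import math

def distance_weighted_estimate(target, locations, scores, bandwidth=50.0):
    """Distance-weighted average at `target`: every measurement location
    contributes, down-weighted by its distance via a Gaussian kernel
    (kernel form and bandwidth are illustrative choices)."""
    num = den = 0.0
    tx, ty = target
    for (x, y), z in zip(locations, scores):
        d = math.hypot(x - tx, y - ty)                 # Euclidean distance
        weight = math.exp(-0.5 * (d / bandwidth) ** 2)  # smooth distance decay
        num += weight * z
        den += weight
    return num / den

def weighted_map(grid, locations, scores, bandwidth=50.0):
    """A map is simply this estimate evaluated over a grid of target points."""
    return [distance_weighted_estimate(p, locations, scores, bandwidth)
            for p in grid]
```

A point midway between two measurement sites receives their symmetric average, while a point sitting on a site is dominated by that site's score — which is how the technique smooths spatial patterns without predefined spatial units.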
Hybrid Experimental Designs for Intervention Development: What, Why, and How.
IF 15.6 Tier 1 Psychology Q1 PSYCHOLOGY Pub Date: 2022-07-01 Epub Date: 2022-09-07 DOI: 10.1177/25152459221114279
Inbal Nahum-Shani, John J Dziak, Maureen A Walton, Walter Dempsey

Advances in mobile and wireless technologies offer tremendous opportunities for extending the reach and impact of psychological interventions and for adapting interventions to the unique and changing needs of individuals. However, insufficient engagement remains a critical barrier to the effectiveness of digital interventions. Human delivery of interventions (e.g., by clinical staff) can be more engaging but potentially more expensive and burdensome. Hence, the integration of digital and human-delivered components is critical to building effective and scalable psychological interventions. Existing experimental designs can be used to answer questions either about human-delivered components that are typically sequenced and adapted at relatively slow timescales (e.g., monthly) or about digital components that are typically sequenced and adapted at much faster timescales (e.g., daily). However, these methodologies do not accommodate sequencing and adaptation of components at multiple timescales and hence cannot be used to empirically inform the joint sequencing and adaptation of human-delivered and digital components. Here, we introduce the hybrid experimental design (HED)-a new experimental approach that can be used to answer scientific questions about building psychological interventions in which human-delivered and digital components are integrated and adapted at multiple timescales. We describe the key characteristics of HEDs (i.e., what they are), explain their scientific rationale (i.e., why they are needed), and provide guidelines for their design and corresponding data analysis (i.e., how can data arising from HEDs be used to inform effective and scalable psychological interventions).

Citations: 0
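The defining feature of an HED described above — randomizing one intervention component on a slow timescale and another on a fast timescale — can be made concrete with a small scheduling sketch. This is schematic only; the component names (weekly human-coach contact, daily digital prompt) and the simple 50/50 re-randomizations are hypothetical illustrations, not the authors' design.

```python
import random

def hybrid_assignments(n_weeks=8, days_per_week=7, seed=0):
    """Schematic hybrid experimental design: a human-delivered component is
    re-randomized on a slow timescale (weekly coach contact: 0/1), while a
    digital component is re-randomized on a fast timescale (daily prompt: 0/1).
    Component names and the 50/50 scheme are illustrative assumptions."""
    rng = random.Random(seed)
    schedule = []
    for week in range(n_weeks):
        coach = rng.choice([0, 1])          # slow-timescale randomization
        for day in range(days_per_week):
            prompt = rng.choice([0, 1])     # fast-timescale randomization
            schedule.append({"week": week, "day": day,
                             "coach": coach, "prompt": prompt})
    return schedule
```

By construction, the coach assignment is constant within each week while the prompt assignment varies day to day — the two timescales whose joint sequencing an HED is built to inform.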