
Latest publications in Psychological Methods

Drawing credible directed acyclic graphs for causal inference.
IF 7 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-03-19 | DOI: 10.1037/met0000831
Nathan J Quimpo,Peter M Steiner
Causal directed acyclic graphs (DAGs) are intelligible representations of real-world data-generating processes that facilitate causal inference by providing (automatized) guidance for assessing whether a causal effect is identified with the observed data and for selecting covariates that remove most, if not all, confounding bias. However, less attention has been paid to the process of constructing causal DAGs. Methodological work often relies on toy examples that have limited practical utility for applied researchers working in complex contexts. This article introduces and demonstrates a stepwise, iterative procedure for drawing credible causal DAGs, which is designed to guide researchers in identifying important sources of confounding while also incorporating research design features of quasi-experiments or randomized experiments, as well as threats to validity (e.g., measurement error, treatment noncompliance). Although constructing a complete DAG that fully captures the data-generating process is difficult and rarely achievable in practice, we argue that developing a credible DAG, one that includes all plausible sources of confounding, is adequate for applied research. The proposed iterative drawing procedure is directly aligned with the goal of constructing credible causal DAGs. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
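The identification logic the abstract alludes to, checking acyclicity and reading candidate confounders off the graph, is mechanical enough to sketch in code. A minimal Python illustration (not the article's procedure; the graph and variable names are hypothetical):

```python
# Toy causal DAG, stored as parent -> children (names are hypothetical):
# confounder C affects treatment T and outcome Y; M mediates T -> Y.
dag = {"C": ["T", "Y"], "T": ["M"], "M": ["Y"], "Y": []}

def is_acyclic(graph):
    """Kahn's algorithm: acyclic iff every node gets a topological position."""
    indegree = {v: 0 for v in graph}
    for children in graph.values():
        for child in children:
            indegree[child] += 1
    frontier = [v for v, d in indegree.items() if d == 0]
    visited = 0
    while frontier:
        node = frontier.pop()
        visited += 1
        for child in graph[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                frontier.append(child)
    return visited == len(graph)

def parents(graph, node):
    return {p for p, children in graph.items() if node in children}

assert is_acyclic(dag)    # a causal DAG must contain no cycles
print(parents(dag, "T"))  # {'C'}: a sufficient backdoor adjustment set here
```

In this toy graph the parents of the treatment close every backdoor path because there is a single confounder; realistic DAGs usually call for dedicated tooling (e.g., the dagitty software) to enumerate adjustment sets.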
Citations: 0
The invariance partial pruning approach to the network comparison in time-series and panel data.
IF 7 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-03-16 | DOI: 10.1037/met0000824
Xinkai Du,Sverre Urnes Johnson,Sacha Epskamp
Network models in time-series and panel data are powerful tools to investigate the dynamical relations among variables. Empirical research often seeks to compare network structures across groups/individuals to understand how element-wise associations respond differently to treatments, providing a framework to explain individual heterogeneity in treatment response and the relative efficacy of different intervention approaches. However, existing methods for comparing n = 1 idiographic networks are restricted to global tests, which cannot identify the precise location of edge heterogeneity. Furthermore, there is a lack of easily applicable methods to compare networks from panel data where just a few time points are available per person. We therefore present the invariance partial pruning (IVPP) approach, which first evaluates heterogeneity globally with the network invariance test and then determines the exact locus of heterogeneity at the edge level with partial pruning. Through simulations, we discovered that the network invariance test based on the Akaike information criterion and the Bayesian information criterion performed well. However, at small sample sizes, the Akaike information criterion showed inflated false positive rates and the Bayesian information criterion showed insufficient power to detect smaller true differences. The likelihood ratio test was prone to false discovery. Comparison with the fully constrained model revealed superior performance to the fully unconstrained model. Partial pruning successfully uncovered specific edge differences with desirable sensitivity and specificity. We conclude that IVPP is an essential supplement to existing network methodology, enabling the comparison of networks from both time-series and panel data and testing specific edge differences. We implement the algorithm in the R package IVPP. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
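The global invariance test can be pictured as a penalized model comparison: fit one model that constrains an edge to be equal across persons and one that frees it, then compare information criteria. A toy Python sketch of that general idea (the article's actual implementation is the R package IVPP; the data here are simulated AR(1) series with made-up coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n=300):
    """One person's time series with autoregressive coefficient phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def gaussian_ll(rss_value, n):
    """Profile log-likelihood of an OLS model with MLE error variance."""
    return -0.5 * n * (np.log(2 * np.pi) + np.log(rss_value / n) + 1)

# Two persons whose lag-1 "edge" truly differs (0.2 vs. 0.7).
series = [simulate_ar1(0.2), simulate_ar1(0.7)]
lagged = [(x[:-1, None], x[1:]) for x in series]
n_total = sum(len(y) for _, y in lagged)

# Unconstrained model: one AR coefficient and one error variance per person.
ll_free = sum(gaussian_ll(rss(X, y), len(y)) for X, y in lagged)
aic_free = 2 * 4 - 2 * ll_free

# Constrained (invariant) model: a single shared coefficient and variance.
X_all = np.vstack([X for X, _ in lagged])
y_all = np.concatenate([y for _, y in lagged])
aic_equal = 2 * 2 - 2 * gaussian_ll(rss(X_all, y_all), n_total)

print(aic_free < aic_equal)  # True: AIC detects the heterogeneity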
Citations: 0
Supplemental Material for The Invariance Partial Pruning Approach to the Network Comparison in Time-Series and Panel Data
IF 7 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-03-09 | DOI: 10.1037/met0000824.supp
Citations: 0
From the 1940s to 2020s: A review of the current state of forced-choice methodology.
IF 7.8 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-03-02 | DOI: 10.1037/met0000822
Jake Plantz, Keith D Wright, Jessica K Flake

Forced-choice measures are an alternative to rating-scale surveys designed to reduce response bias, particularly socially desirable responding, by requiring respondents to make rank-order comparisons among two or more statements at a time. Although forced-choice instruments have been used in psychological testing since at least the 1940s, recent methodological advances in item response theory modeling have enabled the estimation of normative scores from the raw ipsative data these assessments produce. The introduction of new scoring methods has resulted in an uptick in the use of forced-choice tests, as full cross-person comparisons were made possible. This paper chronicles the historical development of forced-choice instruments up to the pivotal introduction of item response models for scoring and uses that foundation to review contemporary methods for their construction and analysis. Our review of modern-day methods begins by examining approaches to constructing forced-choice blocks, including the use of mean indices, interitem agreement coefficients, and factor loadings. We then discuss the ideal-point and dominance-based item response models used to evaluate the internal structure of forced-choice assessments and compute scores, as well as methods for assessing differential item functioning. Throughout the review, we also synthesize literature on evaluating response processes, reliability, and other considerations in test construction. Finally, we discuss ongoing debates regarding the extent to which forced-choice measures effectively limit response bias, particularly when negatively keyed items are included in blocks, and conclude by outlining directions for future research. To support engagement with the historical literature, we provide an annotated bibliography spanning more than 8 decades of forced-choice research. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
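A concrete detail behind the scoring advances the review describes: Thurstonian item response modeling of forced-choice blocks typically begins by recoding each block's rank order into binary pairwise-comparison outcomes. A small Python sketch of that recoding step (item labels and ranks are hypothetical):

```python
from itertools import combinations

# One hypothetical forced-choice block of three statements, ranked by a
# respondent (1 = "most like me", 3 = "least like me").
ranks = {"A": 1, "B": 3, "C": 2}

def block_to_pairs(ranks):
    """Recode a block's ranking into binary pairwise outcomes:
    1 if the first item of the pair is preferred (lower rank), else 0."""
    return {(i, j): int(ranks[i] < ranks[j])
            for i, j in combinations(sorted(ranks), 2)}

print(block_to_pairs(ranks))  # {('A', 'B'): 1, ('A', 'C'): 1, ('B', 'C'): 0}
```

A block of k items yields k(k-1)/2 binary outcomes, which is what makes model-based (normative) scoring of the raw ipsative responses possible.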

Citations: 0
Supplemental Material for From the 1940s to 2020s: A Review of the Current State of Forced-Choice Methodology
IF 7 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-02-26 | DOI: 10.1037/met0000822.supp
Citations: 0
Nested model comparisons between common factors and composites.
IF 7.8 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-02-16 | DOI: 10.1037/met0000806
Danielle Siegel, Victoria Savalei, Mijke Rhemtulla

In psychological research, a common factor model is the most popular measurement model for scale items. However, there is increasing awareness that alternative measurement models, such as formative models, may make more theoretical sense for many kinds of psychological data. We demonstrate the nesting structure of three models specified in a structural equation modeling framework: a reflective confirmatory factor analysis (CFA), a formative Henseler-Ogasawara confirmatory composite analysis, and a formative pseudo-indicator model. Unlike CFA, the Henseler-Ogasawara confirmatory composite analysis and the pseudo-indicator model allow for the specification of composites in the structural equation modeling framework. In this article, we establish both theoretically and empirically that these three models are nested within one another, as long as the structural part of each model is saturated. As such, the three models can be compared via a chi-square difference test and other fit indices developed for nested models. We report on the results of a small simulation to evaluate whether the chi-square difference test and the root-mean-square error of approximation based on it (RMSEA_D) can reliably discern whether data were sampled from a CFA or a formative measurement model, varying sample size, indicator weights, and the strength of the correlation with another concept. In two empirical examples, we illustrate how tools for nested model comparison can be used to distinguish between reflective and formative measurement models. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
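For readers unfamiliar with RMSEA_D, a commonly used definition builds it directly from the chi-square difference test between two nested models. This Python helper sketches that formula (the exact variant used in the article may differ, and the fit statistics below are made up):

```python
from math import sqrt

def rmsea_d(chisq_c, df_c, chisq_u, df_u, n):
    """RMSEA of the chi-square difference test between a constrained (c)
    and a less constrained (u) nested model, for sample size n.
    Uses the common definition:
        sqrt(max(d_chisq - d_df, 0) / (d_df * (n - 1)))."""
    d_chisq = chisq_c - chisq_u
    d_df = df_c - df_u
    return sqrt(max(d_chisq - d_df, 0.0) / (d_df * (n - 1)))

# Made-up fit statistics: the constrained model fits noticeably worse.
print(round(rmsea_d(85.4, 24, 52.1, 20, 400), 4))  # 0.1355
```

When the chi-square difference is no larger than the difference in degrees of freedom, RMSEA_D is zero, mirroring how the ordinary RMSEA handles close fit.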

Citations: 0
Timing of a just-in-time intervention to reduce alcohol consumption: A simulation approach to optimize decision rules.
IF 7.8 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-02-16 | DOI: 10.1037/met0000810
Matthias Haucke, Dominic Reichert, Iris Reinhard, Rika Groß, Abhijit Sreepada, Ali Ghadami, Marvin Ganz, Christine Heim, Heike Tost, Ulrich W Ebner-Priemer, Shuyan Liu, Markus Reichert

The effectiveness of a just-in-time adaptive intervention relies on accurate algorithms (i.e., decision rules) that determine when and how interventions should be administered. Yet, so far, there is a lack of empirical investigations that evaluate the performance of decision rules. Simulation can be a useful tool to evaluate and refine a range of decision rules prior to implementing just-in-time adaptive interventions in real-world settings. In this study, we evaluate the performance of various decision rules using both an existing data set and a simulated data set that includes measures of craving and alcohol consumption. The tested decision rules consist of adaptive algorithms, like previous-day mean craving and online logistic regression, as well as fixed thresholds (e.g., a craving score larger than 1 on a 7-point Likert scale). For each decision rule, we generated confusion matrices and compared them across performance metrics, including accuracy, specificity, and sensitivity, as well as the number of interventions sent prior to drinking. To assess the robustness of our findings, we simulated a range of data sets with varying underlying distributions and tested the decision rule performance across these conditions. In addition, we conducted a multilevel logistic regression to identify the strongest association between the predictor and outcome variable across time lags. The presented method illustrates an approach to test and refine one's decision rules prior to launching a time-intensive, smartphone-based real-time intervention.
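The evaluation loop the abstract describes, scoring a candidate decision rule with a confusion matrix, can be sketched compactly. A toy Python example with a fixed-threshold rule and invented craving/drinking data:

```python
# Invented momentary craving scores (0-6) and whether drinking followed.
craving = [0, 2, 1, 3, 0, 4, 1, 2, 0, 3]
drank   = [0, 1, 0, 1, 0, 1, 1, 0, 0, 1]

def evaluate_rule(scores, outcomes, threshold):
    """Confusion matrix for the rule 'intervene when score > threshold'."""
    tp = fp = fn = tn = 0
    for score, y in zip(scores, outcomes):
        intervene = score > threshold
        if intervene and y:
            tp += 1      # intervention sent, drinking would have occurred
        elif intervene:
            fp += 1      # intervention sent unnecessarily
        elif y:
            fn += 1      # drinking missed
        else:
            tn += 1      # correctly left alone
    return tp, fp, fn, tn

tp, fp, fn, tn = evaluate_rule(craving, drank, threshold=1)
sensitivity = tp / (tp + fn)   # share of drinking moments caught
specificity = tn / (tn + fp)   # share of sober moments left alone
accuracy = (tp + tn) / len(drank)
print(tp, fp, fn, tn, accuracy)  # 4 1 1 4 0.8
```

Sweeping `threshold` over its range and recomputing these metrics is the simulation-based tuning step; adaptive rules (e.g., previous-day mean craving) plug into the same loop by replacing the fixed threshold with a per-moment cutoff.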

Citations: 0
A primer on equivalence (negligible effect) testing.
IF 7.8 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-02-09 | DOI: 10.1037/met0000800
Nataly Beribisky, Robert A Cribbie

Equivalence testing, also called negligible effect significance testing (NEST), is appropriate when a researcher would like to find evidence of a negligible association. However, since equivalence testing/NEST procedures are newer and considerably less popular than traditional difference-based null hypothesis significance testing, it is useful to give a gentle introduction to these methods. Accordingly, this tutorial article aims to provide an overview of NEST/equivalence testing procedures by describing the nature of the procedures, explaining when they should be used, defining what considerations should go into their application (including selecting a minimally meaningful effect size), and outlining how they may be conducted and interpreted. The tutorial article also includes examples and code in open-source software to illustrate how these procedures may be applied to real data. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
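The core of most equivalence procedures is the two one-sided tests (TOST) logic: reject "the effect is at least as large as the bound" on both sides of an equivalence interval. A large-sample, z-based Python sketch (the article's examples use open-source statistical software; all numbers here are hypothetical, and small samples would use the t distribution instead):

```python
from math import erf, sqrt

def norm_sf(z):
    """Standard normal survival function."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def tost_z(effect, se, bound):
    """Two one-sided z-tests for equivalence within [-bound, bound].
    Returns the larger of the two one-sided p-values; equivalence is
    supported when it falls below alpha."""
    p_above_lower = norm_sf((effect + bound) / se)  # H0: effect <= -bound
    p_below_upper = norm_sf((bound - effect) / se)  # H0: effect >= +bound
    return max(p_above_lower, p_below_upper)

# Hypothetical: observed difference 0.05, SE 0.08, equivalence bounds +/-0.30.
p = tost_z(0.05, 0.08, 0.30)
print(p < 0.05)  # True: the effect is statistically negligible at alpha = .05
```

The choice of `bound` is the minimally meaningful effect size the primer discusses; the same observed effect can support or fail equivalence depending entirely on that substantive choice.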

Citations: 0
Planned missingness to reduce survey length: A sheep in wolf's clothing.
IF 7 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-02-05 | DOI: 10.1037/met0000793
Charlene Zhang, Paul R. Sackett, Saron Demeke
Citations: 0
Using latent class analysis to justify a latent continuum in item development.
IF 7 | Tier 1, Psychology | Q1 PSYCHOLOGY, MULTIDISCIPLINARY | Pub Date: 2026-02-05 | DOI: 10.1037/met0000757
Jay Verkuilen, Sydne T. McCluskey, Magdalen Beiting-Parrish, Aleksandra Kazakova, Howard T. Everson
Citations: 0