Pub Date: 2025-03-01 | Epub Date: 2025-03-13 | DOI: 10.1080/00273171.2024.2414479
Arianne Herrera-Bennett, Mijke Rhemtulla
Work surrounding the replicability and generalizability of network models has increased in recent years, prompting debate on whether network properties can be expected to be consistent across samples. To date, certain methodological practices may have contributed to observed inconsistencies, including use of single-item indicators and non-identical measurement tools. The current study used a resampling approach to disentangle the effects of sampling variability from scale variability when assessing network replicability in empirical data. Additionally, we explored whether consistencies in network characteristics were improved when more items were aggregated to estimate node scores, which we hypothesized should yield more representative measures of latent constructs. Overall, using different scales produced more variability in network properties than using different samples, but these discrepancies were markedly reduced with larger samples and greater node aggregation. Findings underscored the impact of aggregating items when estimating nodes: Multi-item indicators led to denser networks, higher network sensitivity, greater estimates of global strength, and greater levels of consistency in network properties (e.g., edge weights, centrality scores). Taken together, variability in network properties across samples may arise from poor measurement conditions; additionally, variability may reflect properties of the true network model and/or the measurement instrument. All data and syntax are openly available online (https://osf.io/m37q2/).
Title: Exploring the Effects of Sampling Variability, Scale Variability, and Node Aggregation on the Consistency of Estimated Networks. Multivariate Behavioral Research, pp. 275-295.
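The node-aggregation finding has a classical measurement rationale: averaging more items shrinks item-specific error, so aggregated node scores track the latent construct more closely. A minimal Python sketch (not from the paper; the single-item reliability of 0.40 is an arbitrary illustration) using the Spearman-Brown prophecy formula:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of an average of k parallel items, each with reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

# A single item with modest reliability...
r1 = spearman_brown(0.40, 1)   # 0.40
# ...becomes a much more reliable node score once 4 items are aggregated,
# which is one route to the greater consistency the study reports.
r4 = spearman_brown(0.40, 4)   # ~0.73
```

Under this formula, consistency gains from aggregation flatten out as k grows, which matches the intuition that the largest improvements come from moving beyond single-item indicators.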
Pub Date: 2025-03-01 | Epub Date: 2024-10-20 | DOI: 10.1080/00273171.2024.2412682
Erik Sengewald, Katinka Hardt, Marie-Ann Sengewald
Among the most important merits of modern missing data techniques such as multiple imputation (MI) and full-information maximum likelihood estimation is the possibility to include additional information about the missingness process via auxiliary variables. During the past decade, the choice of auxiliary variables has been investigated under a variety of different conditions and more recent research points to the potentially biasing effect of certain auxiliary variables, particularly colliders (Thoemmes & Rose, 2014). In this article, we further extend biasing mechanisms of certain auxiliary variables considered in previous research and thereby focus on their effects on individual diagnosis based on norming, in which the whole distribution of a variable is of interest rather than average coefficients (e.g., means). For this, we first provide the theoretical underpinnings of the mechanisms under study and then provide two focused simulations that (i) directly expand on the collider scenario in Thoemmes and Rose (2014, appendix A) by considering outcomes that are relevant to norming and (ii) extend the scenarios under consideration by instrumental variable mechanisms. We illustrate the bias mechanisms for two different norming approaches and exemplify the procedures by means of an empirical example. We end by discussing limitations and implications of our research.
Title: A Causal View on Bias in Missing Data Imputation: The Impact of Evil Auxiliary Variables on Norming of Test Scores. Multivariate Behavioral Research, pp. 258-274.
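The collider mechanism the authors build on (Thoemmes &amp; Rose, 2014) is easy to reproduce. In this small pure-Python simulation (illustrative values, not from the article), two independent variables become spuriously correlated once a collider — a variable caused by both — is conditioned on, here by selecting cases on it:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation, pure Python."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(1)
n = 20_000
x = [random.gauss(0, 1) for _ in range(n)]                   # e.g., the variable to be normed
y = [random.gauss(0, 1) for _ in range(n)]                   # an independent variable
c = [xi + yi + random.gauss(0, 1) for xi, yi in zip(x, y)]   # collider: caused by both

r_all = corr(x, y)                                           # ~0: x and y are independent
sel = [(xi, yi) for xi, yi, ci in zip(x, y, c) if ci > 1]    # condition on the collider
r_sel = corr([s[0] for s in sel], [s[1] for s in sel])       # clearly negative: induced bias
```

An "evil" auxiliary variable in imputation plays the role of `c`: using it to model missingness implicitly conditions on it, distorting the imputed distribution rather than just its mean — which is why norming, where the whole distribution matters, is especially vulnerable.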
Pub Date: 2025-03-01 | Epub Date: 2024-10-29 | DOI: 10.1080/00273171.2024.2418515
Judith J M Rijnhart, Matthew J Valente, David P MacKinnon
Despite previous warnings against the use of the difference-in-coefficients method for estimating the indirect effect when the outcome in the mediation model is binary, the difference-in-coefficients method remains readily used in a variety of fields. The continued use of this method is presumably because of the lack of awareness that this method conflates the indirect effect estimate and non-collapsibility. In this paper, we aim to demonstrate the problems associated with the difference-in-coefficients method for estimating indirect effects for mediation models with binary outcomes. We provide a formula that decomposes the difference-in-coefficients estimate into (1) an estimate of non-collapsibility, and (2) an indirect effect estimate. We use a simulation study and an empirical data example to illustrate the impact of non-collapsibility on the difference-in-coefficients estimate of the indirect effect. Further, we demonstrate the application of several alternative methods for estimating the indirect effect, including the product-of-coefficients method and regression-based causal mediation analysis. The results emphasize the importance of choosing a method for estimating the indirect effect that is not affected by non-collapsibility.
Title: Why You Should Not Estimate Mediated Effects Using the Difference-in-Coefficients Method When the Outcome is Binary. Multivariate Behavioral Research, pp. 296-304. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11991894/pdf/
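Non-collapsibility itself can be shown with closed-form arithmetic: even when a binary mediator is independent of the exposure (so there is no confounding at all), the marginal odds ratio is smaller than the conditional one — exactly the quantity that contaminates a difference-in-coefficients estimate. A hedged Python sketch with arbitrary coefficient values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# True conditional model: logit P(Y=1 | X, M) = b0 + b1*X + b2*M,
# with a binary "mediator" M independent of X (P(M=1) = 0.5).
b0, b1, b2 = -1.0, 1.0, 2.0

def p_marginal(x):
    """P(Y=1 | X=x), averaging over M."""
    return 0.5 * sigmoid(b0 + b1 * x) + 0.5 * sigmoid(b0 + b1 * x + b2)

conditional_or = math.exp(b1)                        # OR for X holding M fixed: e^1 ~ 2.72
p1, p0 = p_marginal(1), p_marginal(0)
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))      # OR for X ignoring M: ~2.23
# marginal_or < conditional_or even though M is not a confounder:
# dropping M shrinks the logistic coefficient, so c - c' is nonzero
# without any indirect effect being present.
```

Because the shrinkage occurs even with a null indirect effect, the difference c - c' mixes non-collapsibility with mediation, whereas the product-of-coefficients and counterfactual approaches do not.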
Pub Date: 2025-03-01 | Epub Date: 2024-11-26 | DOI: 10.1080/00273171.2024.2428222
Lydia G Speyer, Xinxin Zhu, Yi Yang, Denis Ribeaud, Manuel Eisner
Random-intercept cross-lagged panel models (RI-CLPMs) are increasingly used to investigate research questions focusing on how one variable at one time point affects another variable at the subsequent time point. Due to the implied temporal sequence of events in such research designs, interpretations of RI-CLPMs primarily focus on longitudinal cross-lagged paths while disregarding concurrent associations, modeling these only as residual covariances. However, this may bias the cross-lagged effects, especially when data collected at the same time point refer to different reference timeframes, creating a temporal sequence of events for constructs measured concurrently. To examine this issue, we conducted a series of empirical analyses of how modeling or not modeling directional within-time point associations impacts inferences drawn from RI-CLPMs, using data from the longitudinal z-proso study. Results highlight that not considering directional concurrent effects may lead to biased cross-lagged effects. Thus, it is essential to carefully consider potential directional concurrent effects when choosing models to analyze directional associations between variables over time.
Title: On the Importance of Considering Concurrent Effects in Random-Intercept Cross-Lagged Panel Modelling: Example Analysis of Bullying and Internalising Problems. Multivariate Behavioral Research, pp. 328-344. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996063/pdf/
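The bias mechanism can be illustrated outside an SEM framework. In the sketch below (illustrative Python, not the authors' analysis), y depends on x only concurrently — the true lagged effect of x on y is exactly zero — yet a lagged-only regression attributes a sizable "cross-lagged" effect to x at t-1, simply because x at t-1 predicts x at t:

```python
import random

random.seed(7)
n = 100_000
# Data-generating model: x is AR(1); y depends on its own past and on x
# CONCURRENTLY. There is no true lagged path from x to y.
x, y = [random.gauss(0, 1)], [random.gauss(0, 1)]
for _ in range(n - 1):
    x_t = 0.3 * x[-1] + random.gauss(0, 1)
    y_t = 0.3 * y[-1] + 0.5 * x_t + random.gauss(0, 1)
    x.append(x_t)
    y.append(y_t)

# Lagged-only regression of y_t on y_{t-1} and x_{t-1} (concurrent path omitted),
# solved via the 2x2 normal equations; all series are mean-zero by construction.
Y, Z1, Z2 = y[1:], y[:-1], x[:-1]
dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
s11, s12, s22 = dot(Z1, Z1), dot(Z1, Z2), dot(Z2, Z2)
g1, g2 = dot(Z1, Y), dot(Z2, Y)
det = s11 * s22 - s12 * s12
b_auto = (s22 * g1 - s12 * g2) / det   # autoregressive path, ~0.3
b_cross = (s11 * g2 - s12 * g1) / det  # "cross-lagged" path, ~0.15 despite a true value of 0
```

Substituting the x-equation into the y-equation shows why: y_t = 0.3*y_{t-1} + 0.15*x_{t-1} + noise, so the lagged-only model recovers a nonzero cross-lagged coefficient that is purely a relabeled concurrent effect.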
Pub Date: 2025-03-01 | Epub Date: 2024-11-18 | DOI: 10.1080/00273171.2024.2424514
Alessandro Varacca
In this paper, we propose a Bayesian causal mediation approach to the analysis of experimental data when both the outcome and the mediator are measured through structured questionnaires based on Likert-scaled inquiries. Our estimation strategy builds upon the error-in-variables literature and, specifically, it leverages Item Response Theory to explicitly model the observed surrogate mediator and outcome measures. We employ their elicited latent counterparts in a simple g-computation algorithm, where we exploit the fundamental identifying assumptions of causal mediation analysis to impute all the relevant counterfactuals and estimate the causal parameters of interest. We finally devise a sensitivity analysis procedure to test the robustness of the proposed methods to the restrictive requirement of mediator's conditional ignorability. We demonstrate the functioning of our proposed methodology through an empirical application using survey data from an online experiment on food purchasing intentions and the effect of different labeling regimes.
Title: Latently Mediating: A Bayesian Take on Causal Mediation Analysis with Structured Survey Data. Multivariate Behavioral Research, pp. 305-327.
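Stripping away the IRT measurement layer, the g-computation step can be sketched in a linear toy model, where imputing the counterfactual mediator collapses to the familiar product of coefficients (illustrative Python; all parameter values are arbitrary and not from the article):

```python
import random

random.seed(3)
n = 50_000
a, b, c = 0.8, 0.6, 0.3    # paths: T->M, M->Y, and the direct T->Y effect
T = [random.randint(0, 1) for _ in range(n)]            # randomized treatment
M = [a * t + random.gauss(0, 1) for t in T]             # mediator
Y = [c * t + b * m + random.gauss(0, 1) for t, m in zip(T, M)]  # outcome

mean = lambda v: sum(v) / len(v)
M0 = [m for m, t in zip(M, T) if t == 0]; M1 = [m for m, t in zip(M, T) if t == 1]
Y0 = [v for v, t in zip(Y, T) if t == 0]; Y1 = [v for v, t in zip(Y, T) if t == 1]

a_hat = mean(M1) - mean(M0)             # T->M effect (T is randomized)

def scp(u, v):
    """Sum of centered cross-products."""
    mu, mv = mean(u), mean(v)
    return sum((x - mu) * (z - mv) for x, z in zip(u, v))

# M->Y slope pooled within treatment arms (mediator ignorability holds by design here)
b_hat = (scp(M0, Y0) + scp(M1, Y1)) / (scp(M0, M0) + scp(M1, M1))

# g-computation: impute E[Y(1, M(0))] by plugging the control-arm mediator model
# into the outcome model; in this linear case the contrast reduces to a_hat * b_hat.
nie = a_hat * b_hat                     # natural indirect effect, ~a*b = 0.48
nde = (mean(Y1) - mean(Y0)) - nie       # natural direct effect, ~c = 0.30
```

The paper's contribution is to run this same counterfactual-imputation logic on latent mediator and outcome scores elicited via IRT, rather than on directly observed continuous variables as in this sketch.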
Pub Date: 2025-03-01 | Epub Date: 2024-09-16 | DOI: 10.1080/00273171.2024.2395941
Dingjing Shi, Alexander P Christensen, Eric Anthony Day, Hudson F Golino, Luis Eduardo Garrido
To understand psychological data, it is crucial to examine the structure and dimensions of variables. In this study, we examined alternative estimation algorithms to the conventional GLASSO-based exploratory graph analysis (EGA) in network psychometric models to assess the dimensionality structure of the data. The study applied Bayesian conjugate or Jeffreys' priors to estimate the graphical structure and then used the Louvain community detection algorithm to partition and identify groups of nodes, which allowed the detection of multi- and unidimensional factor structures. Monte Carlo simulations suggested that the two alternative Bayesian estimation algorithms performed comparably to or better than the GLASSO-based EGA and conventional parallel analysis (PA). When estimating the multidimensional factor structure, the analytically based method (EGA.analytical) showed the best balance between accuracy and mean bias/absolute error: its accuracy tied with EGA for the highest, with the smallest errors. The sampling-based approach (EGA.sampling) yielded higher accuracy and smaller errors than PA, and lower accuracy but also smaller errors than EGA. Techniques from the two algorithms performed more stably than EGA and PA across data conditions. When estimating the unidimensional structure, PA performed best, followed closely by EGA, then EGA.analytical and EGA.sampling. Furthermore, the study explored four full Bayesian techniques for assessing dimensionality in network psychometrics; these performed best when using Bayesian hypothesis testing or deriving posterior samples of graph structures under small sample sizes. The study recommends the EGA.analytical technique as an alternative tool for assessing dimensionality and advocates the EGA.sampling method as a further valuable alternative. The findings also point to encouraging prospects for extending the regularization-based EGA network modeling method to the Bayesian framework, and future directions in this line of work are discussed. The study illustrates the practical application of the techniques with two empirical examples in R.
Title: Exploring Estimation Procedures for Reducing Dimensionality in Psychological Network Modeling. Multivariate Behavioral Research, pp. 184-210.
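The EGA logic — estimate a network, then let a community search return the number of dimensions — can be miniaturized. The sketch below substitutes connected components for the Louvain algorithm and a hand-written partial correlation matrix for a GLASSO or Bayesian estimate, so it is a stripped-down stand-in for the idea, not the authors' procedure:

```python
# Toy network: 6 variables in two blocks. Within-block partial correlations are
# strong; between-block ones are zero. Thresholding the matrix and counting
# connected components then recovers the number of dimensions (communities).
pcor = [
    [1.0, 0.4, 0.4, 0.0, 0.0, 0.0],
    [0.4, 1.0, 0.4, 0.0, 0.0, 0.0],
    [0.4, 0.4, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.4, 0.4],
    [0.0, 0.0, 0.0, 0.4, 1.0, 0.4],
    [0.0, 0.0, 0.0, 0.4, 0.4, 1.0],
]

def n_components(mat, threshold=0.1):
    """Number of connected components of the thresholded network (depth-first search)."""
    p = len(mat)
    seen, comps = set(), 0
    for start in range(p):
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            stack.extend(j for j in range(p)
                         if j != i and abs(mat[i][j]) > threshold and j not in seen)
    return comps

dims = n_components(pcor)   # 2 estimated dimensions
```

Louvain-style community detection generalizes this: it can split a connected network into densely linked communities, which is what lets EGA separate correlated factors that simple components would merge.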
Pub Date: 2025-03-01 | Epub Date: 2024-10-16 | DOI: 10.1080/00273171.2024.2410760
Justin D Kracht, Niels G Waller
Researchers simulating covariance structure models sometimes add model error to their data to produce model misfit. Presently, the most popular methods for generating error-perturbed data are those by Tucker, Koopman, and Linn (TKL), Cudeck and Browne (CB), and Wu and Browne (WB). Although all of these methods include parameters that control the degree of model misfit, none can generate data that reproduce multiple fit indices. To address this issue, we describe a multiple-target TKL method that can generate error-perturbed data that reproduce target RMSEA and CFI values either individually or together. To evaluate this method, we simulated error-perturbed correlation matrices for an array of factor analysis models using the multiple-target TKL method, the CB method, and the WB method. Our results indicated that the multiple-target TKL method produced solutions with RMSEA and CFI values closer to their target values than those of the alternative methods. Thus, the multiple-target TKL method should be a useful tool for researchers who wish to generate error-perturbed correlation matrices with a known degree of model error. All functions described in this work are available in the fungible R library. Additional materials (e.g., R code, supplemental results) are available at https://osf.io/vxr8d/.
Title: Make Some Noise: Generating Data from Imperfect Factor Models. Multivariate Behavioral Research, pp. 236-257.
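The TKL idea of representing model error as many small "minor" common factors can be sketched as follows. This is a simplified illustration of the perturb-and-rescale step only — not the paper's multiple-target implementation or the fungible library's code, and the loading sizes are arbitrary:

```python
import random

random.seed(11)
p = 6
major = [0.7] * p    # one major factor with uniform loadings
R_model = [[1.0 if i == j else major[i] * major[j] for j in range(p)] for i in range(p)]

# TKL-style perturbation: add many minor common factors with small random
# loadings, then rescale so the perturbed matrix keeps a unit diagonal.
n_minor = 50
W = [[random.gauss(0, 0.08) for _ in range(n_minor)] for _ in range(p)]
E = [[sum(W[i][k] * W[j][k] for k in range(n_minor)) for j in range(p)] for i in range(p)]

R_pert = [[0.0] * p for _ in range(p)]
for i in range(p):
    for j in range(p):
        if i == j:
            R_pert[i][j] = 1.0
        else:
            # rescale by the standard deviations implied by the added minor variance
            den = ((1 + E[i][i]) * (1 + E[j][j])) ** 0.5
            R_pert[i][j] = (R_model[i][j] + E[i][j]) / den
```

In the actual TKL method the minor-factor loadings are tuned so the misfit reaches a target size; the paper's contribution is tuning them so that RMSEA and CFI hit their targets jointly.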
Pub Date: 2025-03-01 | Epub Date: 2024-12-23 | DOI: 10.1080/00273171.2024.2436406
Inhan Kang
In this article, we propose latent variable models that jointly account for responses and response times (RTs) in multidimensional personality measurements. We address two key research questions regarding the latent structure of RT distributions through model comparisons. First, we decompose RT into decision and non-decision times by incorporating irreducible minimum shifts in RT distributions, as done in cognitive decision-making models. Second, we investigate whether the speed factor underlying decision times should be multidimensional with the same latent structure as personality traits, or whether a unidimensional speed factor suffices. Comprehensive model comparisons across four distinct datasets suggest that a joint model with person-specific parameters to account for shifts in RT distributions and a unidimensional speed factor provides the best account of ordinal responses and RTs. Posterior predictive checks further confirm these findings. Additionally, simulation studies validate the parameter recovery of the proposed models and support the empirical results. Most importantly, failing to account for the irreducible minimum shift in RT distributions leads to systematic biases in other model components and severe underestimation of the nonlinear relationship between responses and RTs.
Title: On the Latent Structure of Responses and Response Times from Multidimensional Personality Measurement with Ordinal Rating Scales. Multivariate Behavioral Research, pp. 393-422.
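The cost of ignoring the irreducible minimum shift can be demonstrated with a shifted-lognormal toy model (illustrative Python; the parameter values are arbitrary, not estimates from the article). Subtracting the non-decision time recovers the decision-time location parameter, while working with raw RTs distorts it:

```python
import math
import random

random.seed(5)
n = 20_000
t0 = 0.30                 # irreducible non-decision time (seconds)
mu, sigma = -1.0, 0.5     # lognormal decision-time parameters

# RT = non-decision shift + lognormally distributed decision time
rt = [t0 + math.exp(random.gauss(mu, sigma)) for _ in range(n)]

# Estimate the lognormal location with and without accounting for the shift.
mu_shifted = sum(math.log(r - t0) for r in rt) / n   # recovers mu ~ -1.0
mu_ignored = sum(math.log(r) for r in rt) / n        # biased well upward
```

Because the log transform is nonlinear, the bias in `mu_ignored` is not a constant offset: it is largest for fast responses, which is one way an unmodeled shift distorts the estimated response-RT relationship rather than merely relocating it.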
Pub Date: 2025-03-01 | Epub Date: 2024-12-11 | DOI: 10.1080/00273171.2024.2432918
Jannis Kreienkamp, Maximilian Agostini, Rei Monden, Kai Epstude, Peter de Jonge, Laura F Bringmann
Psychological researchers and practitioners collect increasingly complex time series data aimed at identifying differences in development between participants or patients. Past research has proposed a number of dynamic measures that describe meaningful developmental patterns for psychological data (e.g., instability, inertia, linear trend). Yet, commonly used clustering approaches often cannot accommodate these measures (e.g., because of model assumptions). We propose feature-based time series clustering as a flexible, transparent, and well-grounded approach that clusters participants directly on these dynamic measures using common clustering algorithms. We introduce the approach and illustrate its utility with real-world empirical data that highlight common ESM challenges of multivariate conceptualizations, structural missingness, and non-stationary trends. We use the data to showcase the main steps of input selection, feature extraction, feature reduction, feature clustering, and cluster evaluation. We also provide practical algorithm overviews and readily available code for data preparation, analysis, and interpretation.
{"title":"A Gentle Introduction and Application of Feature-Based Clustering with Psychological Time Series.","authors":"Jannis Kreienkamp, Maximilian Agostini, Rei Monden, Kai Epstude, Peter de Jonge, Laura F Bringmann","doi":"10.1080/00273171.2024.2432918","DOIUrl":"10.1080/00273171.2024.2432918","url":null,"abstract":"<p><p>Psychological researchers and practitioners collect increasingly complex time series data aimed at identifying differences between the developments of participants or patients. Past research has proposed a number of dynamic measures that describe meaningful developmental patterns for psychological data (e.g., instability, inertia, linear trend). Yet, commonly used clustering approaches are often not able to include these meaningful measures (e.g., due to model assumptions). We propose feature-based time series clustering as a flexible, transparent, and well-grounded approach that clusters participants based on the dynamic measures directly using common clustering algorithms. We introduce the approach and illustrate the utility of the method with real-world empirical data that highlight common ESM challenges of multivariate conceptualizations, structural missingness, and non-stationary trends. We use the data to showcase the main steps of input selection, feature extraction, feature reduction, feature clustering, and cluster evaluation. 
We also provide practical algorithm overviews and readily available code for data preparation, analysis, and interpretation.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"362-392"},"PeriodicalIF":5.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
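A minimal sketch of the feature-based pipeline described above (feature extraction, standardization, clustering), assuming four simple features (mean level, SD, lag-1 autocorrelation as inertia, and linear-trend slope) and deterministic single-linkage agglomerative clustering; all names are ours, not the authors' published code:

```python
import numpy as np

def extract_features(series):
    """Dynamic features for one participant's time series: mean level,
    variability (SD), inertia (lag-1 autocorrelation), linear trend (slope)."""
    x = np.asarray(series, dtype=float)
    t = np.arange(len(x))
    m = x.mean()
    sd = x.std(ddof=1)
    inertia = ((x[:-1] - m) * (x[1:] - m)).sum() / ((x - m) ** 2).sum()
    slope = np.polyfit(t, x, 1)[0]
    return np.array([m, sd, inertia, slope])

def feature_cluster(data, k=2):
    """Cluster participants on standardized features using deterministic
    single-linkage agglomerative clustering down to k clusters."""
    feats = np.array([extract_features(s) for s in data])
    z = (feats - feats.mean(axis=0)) / feats.std(axis=0)
    clusters = [[i] for i in range(len(z))]
    def linkage(a, b):  # single linkage: closest pair across two clusters
        return min(np.linalg.norm(z[i] - z[j]) for i in a for j in b)
    while len(clusters) > k:
        _, p, q = min((linkage(clusters[p], clusters[q]), p, q)
                      for p in range(len(clusters))
                      for q in range(p + 1, len(clusters)))
        clusters[p] += clusters.pop(q)
    labels = np.empty(len(z), dtype=int)
    for lab, members in enumerate(clusters):
        labels[members] = lab
    return labels, feats
```

With clearly separated stable versus trending series, the two groups are recovered from the features alone; in practice one would swap in richer feature sets and a standard clustering library, which is the flexibility the abstract emphasizes.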
Pub Date: 2025-03-01 | Epub Date: 2024-08-31 | DOI: 10.1080/00273171.2024.2394607
Zhaojun Li, Lingyue Li, Bo Zhang, Mengyang Cao, Louis Tay
Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine the performance of the UIRTree model against three other models for Likert-type responses: Samejima's Graded Response Model, the Generalized Graded Unfolding Model, and the Dominance Item Response Tree (DIRTree) model. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and mis-specifying the item response process or wrongly ignoring RSs was detrimental to the estimation of key parameters. Then, we used datasets from empirical studies to show that the UIRTree model could fit personality datasets well and produced more reasonable parameter estimates compared to competing models. A strong presence of RSs was also revealed by the UIRTree model. Finally, we provided examples with R code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.
{"title":"Killing Two Birds with One Stone: Accounting for Unfolding Item Response Process and Response Styles Using Unfolding Item Response Tree Models.","authors":"Zhaojun Li, Lingyue Li, Bo Zhang, Mengyang Cao, Louis Tay","doi":"10.1080/00273171.2024.2394607","DOIUrl":"10.1080/00273171.2024.2394607","url":null,"abstract":"<p><p>Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine the performance of the UIRTree model compared to three other models - Samejima's Graded Response Model, Generalized Graded Unfolding Model, and Dominance Item Response Tree model, for Likert-type responses. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and mis-specifying item response process or wrongly ignoring RSs was detrimental to the estimation of key parameters. Then, we used datasets from empirical studies to show that the UIRTree model could fit personality datasets well and produced more reasonable parameter estimates compared to competing models. A strong presence of RS(s) was also revealed by the UIRTree model. 
Finally, we provided examples with <i>R</i> code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"161-183"},"PeriodicalIF":5.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
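The tree decomposition that IRTree-type models build on can be sketched for a 5-point scale. This is a generic midpoint/direction/extremity mapping with names of our own choosing; the UIRTree model's key move, replacing the dominance model at the content node with an unfolding one, is only caricatured here by a proximity-based endorsement curve rather than the full GGUM form:

```python
import math

def irtree_pseudo_items(response):
    """Map a 5-point Likert response (1-5) onto binary pseudo-items for a
    three-node tree: midpoint (3 vs. rest), direction (agree vs. disagree),
    extremity (endpoint vs. mild). None marks nodes a response never
    reaches, which IRTree models treat as structurally missing."""
    if response not in (1, 2, 3, 4, 5):
        raise ValueError("expected a Likert response in 1-5")
    if response == 3:
        return {"midpoint": 1, "direction": None, "extremity": None}
    return {"midpoint": 0,
            "direction": int(response > 3),        # 1 = agree side
            "extremity": int(response in (1, 5))}  # 1 = endpoint category

def unfolding_direction_prob(theta, delta, tau=1.0):
    """Toy unfolding curve for the direction node: endorsement probability
    peaks when the person (theta) sits near the item location (delta) and
    falls off on both sides."""
    return math.exp(-((theta - delta) ** 2) / (2 * tau ** 2))
```

Under a dominance node model, endorsement probability would rise monotonically in theta; the peaked curve above is what distinguishes an unfolding node, and it is this distinction that the UIRTree model separates from response-style effects at the extremity node.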