Pub Date: 2024-09-01. Epub Date: 2024-08-04. DOI: 10.1080/00273171.2024.2347960
Julian D Karch, Andres F Perez-Alonso, Wicher P Bergsma
When examining whether two continuous variables are associated, tests based on Pearson's, Kendall's, and Spearman's correlation coefficients are typically used. This paper explores modern nonparametric independence tests as an alternative, which, unlike traditional tests, can in principle detect any type of relationship. In addition to existing modern nonparametric independence tests, we developed and considered two novel variants of existing tests, most notably the Heller-Heller-Gorfine-Pearson (HHG-Pearson) test. We conducted a simulation study to compare traditional independence tests, such as Pearson's correlation, with the modern nonparametric independence tests in situations commonly encountered in psychological research. As expected, no test had the highest power across all relationships. However, the distance correlation and HHG-Pearson tests were found to have substantially greater power than all traditional tests for many relationships and only slightly less power in the worst case. A similar pattern was found in favor of the HHG-Pearson test compared to the distance correlation test. However, given that the distance correlation performed better for linear relationships and is more widely accepted, we suggest considering its use in place of, or in addition to, traditional methods when there is no prior knowledge of the relationship type, as is often the case in psychological research.
Title: Beyond Pearson's Correlation: Modern Nonparametric Independence Tests for Psychological Research. Multivariate Behavioral Research, pp. 957-977.
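The distance correlation test discussed above can be sketched compactly. The following is a minimal, stdlib-only illustration under simplified assumptions (the biased V-statistic version of the statistic, paired with a permutation p-value) — it is not the authors' implementation; packages such as R's `energy` or Python's `dcor` provide production versions:

```python
import math
import random

def _double_centered_dist(xs):
    # Pairwise absolute-distance matrix, double-centered (V-statistic version).
    n = len(xs)
    d = [[abs(xs[i] - xs[j]) for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - row[j] + grand for j in range(n)] for i in range(n)]

def distance_correlation(xs, ys):
    n = len(xs)
    a = _double_centered_dist(xs)
    b = _double_centered_dist(ys)
    dcov2 = sum(a[i][j] * b[i][j] for i in range(n) for j in range(n)) / n ** 2
    dvarx = sum(v * v for r in a for v in r) / n ** 2
    dvary = sum(v * v for r in b for v in r) / n ** 2
    denom = math.sqrt(dvarx * dvary)
    # max(..., 0) guards against tiny negative values from floating-point error
    return math.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def dcor_permutation_test(xs, ys, n_perm=99, seed=1):
    # Permutation p-value: shuffling ys simulates the null of independence.
    rng = random.Random(seed)
    observed = distance_correlation(xs, ys)
    ys = list(ys)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if distance_correlation(xs, ys) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)
```

For a purely quadratic relationship on a symmetric grid, Pearson's correlation is essentially zero, yet the distance correlation is clearly positive and the permutation test rejects independence — the kind of relationship where the modern tests have the power advantage described above.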
Pub Date: 2024-09-01. Epub Date: 2024-05-23. DOI: 10.1080/00273171.2024.2350236
Yue Liu, Kit-Tai Hau, Hongyun Liu
Linear mixed-effects models have been increasingly used to analyze dependent data in psychological research. Despite their many advantages over ANOVA, critical issues in their analyses remain. As random effects and model complexity increase, estimation becomes computationally demanding and convergence becomes challenging. Applied users need help choosing appropriate methods to estimate random effects. The present Monte Carlo simulation study investigated the impacts when the restricted maximum likelihood (REML) and Bayesian estimation models were misspecified in the estimation. We also compared the performance of the Akaike information criterion (AIC) and the deviance information criterion (DIC) in model selection. Results showed that models neglecting the existing random effects had inflated Type I errors, unacceptable coverage, and inaccurate R-squared measures of fixed and random effects variation. Furthermore, models with redundant random effects had convergence problems, lower statistical power, and inaccurate R-squared measures for Bayesian estimation. The convergence problem was more severe for REML, whereas reduced power and inaccurate R-squared measures were more severe for Bayesian estimation. Notably, DIC was better than AIC in identifying the true models (especially models including a person random intercept only), improving convergence rates, and providing more accurate effect size estimates, although AIC had higher power than DIC with 10 items and the most complicated true model.
Title: Linear Mixed-Effects Models for Dependent Data: Power and Accuracy in Parameter Estimation. Multivariate Behavioral Research, pp. 978-994.
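The AIC trade-off that drives this kind of model selection can be illustrated outside the mixed-model setting. Below is a toy, stdlib-only sketch (an assumption for illustration, not the paper's simulation design) comparing an intercept-only Gaussian regression with a slope model via AIC = 2k - 2·lnL:

```python
import math

def _gaussian_loglik(resid, sigma2):
    # Log-likelihood of residuals under a normal error model with variance sigma2.
    n = len(resid)
    return -0.5 * n * math.log(2 * math.pi * sigma2) - sum(e * e for e in resid) / (2 * sigma2)

def aic_linear(xs, ys, use_slope):
    # AIC = 2k - 2*lnL for intercept-only vs. simple-slope regression (ML estimates).
    n = len(ys)
    if use_slope:
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        a = my - b * mx
        resid = [y - (a + b * x) for x, y in zip(xs, ys)]
        k = 3  # intercept, slope, error variance
    else:
        my = sum(ys) / n
        resid = [y - my for y in ys]
        k = 2  # intercept, error variance
    sigma2 = sum(e * e for e in resid) / n  # ML estimate of the error variance
    return 2 * k - 2 * _gaussian_loglik(resid, sigma2)
```

With a clear trend in the data, the slope model attains a much lower AIC despite its extra parameter — the same penalized-fit logic that AIC and DIC apply when choosing among random-effects structures.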
Pub Date: 2024-08-31. DOI: 10.1080/00273171.2024.2394607
Zhaojun Li, Lingyue Li, Bo Zhang, Mengyang Cao, Louis Tay
Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine the performance of the UIRTree model for Likert-type responses compared to three other models: Samejima's Graded Response Model, the Generalized Graded Unfolding Model, and the Dominance Item Response Tree (DIRTree) model. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and misspecifying the item response process or wrongly ignoring RSs was detrimental to the estimation of key parameters. Then, we used datasets from empirical studies to show that the UIRTree model could fit personality datasets well and produced more reasonable parameter estimates compared to competing models. A strong presence of RSs was also revealed by the UIRTree model. Finally, we provided examples with R code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.
Title: Killing Two Birds with One Stone: Accounting for Unfolding Item Response Process and Response Styles Using Unfolding Item Response Tree Models. Multivariate Behavioral Research, pp. 1-23.
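The core move in any IRTree model is expanding each Likert response into binary pseudo-items along a hypothesized decision tree. The sketch below uses one common dominance-tree specification for a 5-point scale (midpoint node, then direction node, then extremity node); this particular tree is an assumption for illustration, not necessarily the exact UIRTree structure:

```python
def dirtree_pseudo_items(response):
    # Expand a 5-point Likert response into three binary pseudo-items:
    #   node 1: was the midpoint (3) chosen?
    #   node 2: agree side (4/5) vs. disagree side (1/2)?
    #   node 3: extreme category (1 or 5) vs. moderate (2 or 4)?
    # None marks a node that is structurally unreached on the response path.
    if response not in (1, 2, 3, 4, 5):
        raise ValueError("expected a response on a 5-point scale")
    if response == 3:
        return (1, None, None)  # midpoint chosen; later nodes never reached
    direction = 1 if response > 3 else 0
    extreme = 1 if response in (1, 5) else 0
    return (0, direction, extreme)
```

Fitting an IRTree then amounts to fitting an IRT model to these pseudo-items; response styles (e.g., extreme responding) surface as person parameters on the midpoint and extremity nodes.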
Pub Date: 2024-08-17. DOI: 10.1080/00273171.2024.2386686
Nataly Beribisky, Robert A Cribbie
A popular measure of model fit in structural equation modeling (SEM) is the standardized root mean squared residual (SRMR) fit index. Equivalence testing has been used to evaluate model fit in SEM but has yet to be applied to the SRMR. Accordingly, the present study proposed equivalence-testing-based fit tests for the SRMR (ESRMR). Several variations of ESRMR were introduced, incorporating different equivalence bounds and methods of computing confidence intervals. A Monte Carlo simulation study compared these novel tests with traditional methods for evaluating model fit. The results demonstrated that certain ESRMR tests based on an analytic computation of the confidence interval correctly reject poor-fitting models and are well-powered for detecting good-fitting models. We also present an illustrative example with real data to demonstrate how ESRMR may be incorporated into model fit evaluation and reporting. Our recommendation is that ESRMR tests be presented in addition to descriptive fit indices for model fit reporting in SEM.
Title: Equivalence Testing Based Fit Index: Standardized Root Mean Squared Residual. Multivariate Behavioral Research, pp. 1-20.
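The SRMR itself is simple to compute: the root mean square of the residuals between the observed and model-implied standardized covariance matrices. A minimal sketch, assuming correlation-matrix inputs (the equivalence-testing step would additionally compare a confidence bound on this quantity against a prespecified equivalence bound):

```python
import math

def srmr(observed, implied):
    # Root mean square of residuals over the lower triangle (diagonal included),
    # assuming both matrices are already standardized (correlation metric).
    p = len(observed)
    total, count = 0.0, 0
    for i in range(p):
        for j in range(i + 1):
            total += (observed[i][j] - implied[i][j]) ** 2
            count += 1
    return math.sqrt(total / count)
```

An ESRMR-style decision would then reject "unacceptable fit" only when the upper confidence limit for the population SRMR falls below the chosen bound, reversing the usual null and alternative.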
Pub Date: 2024-08-07. DOI: 10.1080/00273171.2024.2386060
David Jendryczko, Fridtjof W Nussbeck
The social relations model (SRM) is the standard approach for analyzing dyadic data stemming from round-robin designs. The model can be used to estimate correlation coefficients that reflect the overall reciprocity or accuracy of judgments for individuals and dyads at the sample or population level. Within the social relations structural equation modeling framework, and on the statistical grounding of stochastic measurement and classical test theory, we show how the multiple-indicator SRM can be modified to capture inter-individual and inter-dyadic differences in reciprocal engagement, or inter-individual differences in reciprocal accuracy. All models are illustrated on an open-access round-robin data set containing measures of mimicry, liking, and meta-liking (the belief that one is liked). Results suggest that people who engage more strongly in reciprocal mimicry are liked more after an interaction, and that overestimating one's own popularity is strongly associated with being liked less. Further applications, advantages, and limitations of the models are discussed.
Title: Latent Reciprocal Engagement and Accuracy Variables in Social Relations Structural Equation Modeling. Multivariate Behavioral Research, pp. 1-23.
Pub Date: 2024-07-23. DOI: 10.1080/00273171.2024.2374826
Cara J Arizmendi, Kathleen M Gates
Idiographic measurement models such as p-technique and dynamic factor analysis (DFA) assess latent constructs at the individual level. These person-specific methods may provide more accurate models than models obtained from aggregated data when individuals are heterogeneous in their processes. Developing clustering methods for grouping individuals with similar measurement models would enable researchers to identify whether measurement-model subtypes exist across individuals, as well as to assess whether the different models correspond to the same latent concept. In this paper, methods for clustering individuals based on similarity in measurement-model loadings obtained from time series data are proposed. We review the literature on idiographic factor modeling and measurement invariance, as well as clustering for time series analysis. Through two studies, we explore the utility and effectiveness of these measures. In Study 1, a simulation study is conducted, demonstrating the recovery of groups generated to have differing factor loadings using the proposed clustering method. In Study 2, an extension of Study 1 to DFA is presented with a simulation study. Overall, we found good recovery of simulated clusters, and we provide an example demonstrating the method with empirical data.
Title: Clustering Individuals Based on Similarity in Idiographic Factor Loading Patterns. Multivariate Behavioral Research, pp. 1-25.
Pub Date: 2024-07-22. DOI: 10.1080/00273171.2024.2367485
Trà T Lê, Felix J Clouth, Jeroen K Vermunt
Bias-adjusted three-step latent class (LC) analysis is a popular technique for estimating the relationship between LC membership and distal outcomes. Since it is impossible to randomize LC membership, causal inference techniques are needed to estimate causal effects from observational data. This paper proposes two novel strategies that make use of propensity scores to estimate the causal effect of LC membership on a distal outcome variable. Both strategies modify the bias-adjusted three-step approach by using propensity scores in the last step to control for confounding. The first strategy utilizes inverse propensity weighting (IPW), whereas the second strategy includes the propensity scores as control variables. Classification errors are accounted for using the BCH or ML corrections. We evaluate the performance of these methods in a simulation study by comparing them with three existing approaches that also use propensity scores in a stepwise LC analysis. Both of our newly proposed methods return essentially unbiased parameter estimates, outperforming previously proposed methods. However, for smaller sample sizes, our IPW-based approach shows large variability in the estimates and can be prone to non-convergence. Furthermore, the use of these newly proposed methods is illustrated using data from the LISS panel.
Title: Causal Latent Class Analysis with Distal Outcomes: A Modified Three-Step Method Using Inverse Propensity Weighting. Multivariate Behavioral Research, pp. 1-31.
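The IPW step in the first strategy can be sketched in isolation: each unit is weighted by the inverse of its estimated propensity for the class it was assigned to, and weighted outcome means are then compared across classes. A stdlib-only illustration under simplified assumptions (the helper name and interface are hypothetical, not the authors' code, and the classification-error corrections are omitted):

```python
def ipw_class_means(outcomes, classes, propensities):
    # Inverse-propensity weighting: each unit counts 1/p toward its class,
    # where p is the estimated probability of its observed class given covariates.
    totals = {}
    for y, c, p in zip(outcomes, classes, propensities):
        if not 0.0 < p <= 1.0:
            raise ValueError("propensity scores must lie in (0, 1]")
        w = 1.0 / p
        wsum, ysum = totals.get(c, (0.0, 0.0))
        totals[c] = (wsum + w, ysum + w * y)
    # Normalizing by the summed weights gives the Hajek-style weighted mean.
    return {c: ysum / wsum for c, (wsum, ysum) in totals.items()}
```

Units whose class membership was unlikely given their covariates receive large weights, which is also why IPW estimates can become unstable in small samples, as the simulation results above indicate.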
Pub Date: 2024-07-12. DOI: 10.1080/00273171.2024.2371816
Yanling Li, Zita Oravecz, Linying Ji, Sy-Miin Chow
Missingness in intensive longitudinal data triggered by latent factors constitutes one type of nonignorable missingness that can generate simultaneous missingness across multiple items on each measurement occasion. To address this issue, we propose a multiple imputation (MI) strategy called MI-FS, which incorporates factor scores, lag/lead variables, and missing data indicators into the imputation model. In the context of process factor analysis (PFA), we conducted a Monte Carlo simulation study to compare the performance of MI-FS to listwise deletion (LD), MI with manifest variables (MI-MV, which implements MI on both dependent variables and covariates), and partial MI with MVs (PMI-MV, which implements MI on covariates and handles missing dependent variables via full-information maximum likelihood) under different conditions. Across conditions, we found that MI-based methods overall outperformed LD; the MI-FS approach yielded lower root mean square errors (RMSEs) and higher coverage rates for auto-regression (AR) parameters compared to MI-MV; and the PMI-MV and MI-MV approaches yielded higher coverage rates than MI-FS for most parameters except the AR parameters. These approaches were also compared using an empirical example investigating the relationships between negative affect and perceived stress over time. Recommendations on when and how to incorporate factor scores into MI processes are discussed.
Title: Multiple Imputation with Factor Scores: A Practical Approach for Handling Simultaneous Missingness Across Items in Longitudinal Designs. Multivariate Behavioral Research, pp. 1-29.
Pub Date: 2024-07-01. Epub Date: 2024-05-09. DOI: 10.1080/00273171.2024.2315549
Haley E Yaremych, Kristopher J Preacher
In multilevel models, disaggregating predictors into level-specific parts (typically accomplished via centering) benefits parameter estimates and their interpretations. However, the importance of level-specificity has been sparsely addressed in multilevel literature concerning collinearity. In this study, we develop novel insights into the interactivity of centering and collinearity in multilevel models. After integrating the broad literatures on centering and collinearity, we review level-specific and conflated correlations in multilevel data. Next, by deriving formal relationships between predictor collinearity and multilevel model estimates, we demonstrate how the consequences of collinearity change across different centering specifications and identify data characteristics that may exacerbate or mitigate those consequences. We show that when all or some level-1 predictors are uncentered, slope estimates can be greatly biased by collinearity. Disaggregation of all predictors eliminates the possibility that fixed effect estimates will be biased due to collinearity alone; however, under some data conditions, collinearity is associated with biased standard errors and random effect (co)variance estimates. Finally, we illustrate the importance of disaggregation for diagnosing collinearity in multilevel data and provide recommendations for the use of level-specific collinearity diagnostics. Overall, the necessity of disaggregation for identifying and managing collinearity's consequences in multilevel models is clarified in novel ways.
Title: Understanding the Consequences of Collinearity for Multilevel Models: The Importance of Disaggregation Across Levels. Multivariate Behavioral Research, pp. 693-715.
Pub Date : 2024-07-01Epub Date: 2024-05-31DOI: 10.1080/00273171.2024.2335394
Gemma Hammerton, Jon Heron, Katie Lewis, Kate Tilling, Stijn Vansteelandt
Latent classes are a useful tool in developmental research; however, there are challenges associated with embedding them within a counterfactual mediation model. We develop and test a new method, "updated pseudo class draws" (uPCD), to examine the association between a latent class exposure and a distal outcome that could easily be extended to allow the use of any counterfactual mediation method. UPCD extends an existing group of methods (based on pseudo class draws) that assume that the true values of the latent class variable are missing and need to be multiply imputed using class membership probabilities. We simulate data based on the Avon Longitudinal Study of Parents and Children, examine the performance of existing techniques for relating a latent class exposure to a distal outcome ("one-step," "bias-adjusted three-step," "modal class assignment," "non-inclusive pseudo class draws," and "inclusive pseudo class draws"), and compare the bias and precision of their parameter estimates to those of uPCD when estimating counterfactual mediation effects. We found that uPCD shows minimal bias when estimating counterfactual mediation effects across all levels of entropy. UPCD performs similarly to the recommended methods (one-step and bias-adjusted three-step), but provides greater flexibility and scope for incorporating the latent grouping within any commonly used counterfactual mediation approach.
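The core idea behind the pseudo-class-draws family of methods — treating each unit's true class as missing and multiply imputing it by sampling from the posterior class-membership probabilities — can be sketched as below. This is a generic illustration of that imputation step, not the authors' uPCD implementation; the updating step and the pooling of estimates across draws (e.g., via Rubin's rules) are omitted, and the function name and toy probabilities are invented for the example.

```python
import numpy as np

def pseudo_class_draws(post_probs, n_draws=20, seed=0):
    """Multiply impute latent class membership: for each draw,
    sample every unit's class from its posterior class-membership
    probabilities. Returns an (n_draws, n_units) array of classes."""
    rng = np.random.default_rng(seed)
    post_probs = np.asarray(post_probs, dtype=float)
    n, k = post_probs.shape
    draws = np.empty((n_draws, n), dtype=int)
    for d in range(n_draws):
        for i in range(n):
            draws[d, i] = rng.choice(k, p=post_probs[i])
    return draws

# Three units with varying classification certainty (i.e., entropy):
# the first is classified confidently, the second is a coin flip.
p = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.2, 0.8]])
d = pseudo_class_draws(p, n_draws=1000)
```

In a full analysis, the model of interest would be refit within each draw and the results pooled; the empirical draw frequencies for each unit approximate its posterior probabilities, which is what distinguishes these methods from modal class assignment.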
{"title":"Counterfactual Mediation Analysis with a Latent Class Exposure.","authors":"Gemma Hammerton, Jon Heron, Katie Lewis, Kate Tilling, Stijn Vansteelandt","doi":"10.1080/00273171.2024.2335394","DOIUrl":"10.1080/00273171.2024.2335394","url":null,"abstract":"<p><p>Latent classes are a useful tool in developmental research, however there are challenges associated with embedding them within a counterfactual mediation model. We develop and test a new method \"updated pseudo class draws (uPCD)\" to examine the association between a latent class exposure and distal outcome that could easily be extended to allow the use of any counterfactual mediation method. UPCD extends an existing group of methods (based on pseudo class draws) that assume that the true values of the latent class variable are missing, and need to be multiply imputed using class membership probabilities. We simulate data based on the Avon Longitudinal Study of Parents and Children, examine performance for existing techniques to relate a latent class exposure to a distal outcome (\"one-step,\" \"bias-adjusted three-step,\" \"modal class assignment,\" \"non-inclusive pseudo class draws,\" and \"inclusive pseudo class draws\") and compare bias in parameter estimates and their precision to uPCD when estimating counterfactual mediation effects. We found that uPCD shows minimal bias when estimating counterfactual mediation effects across all levels of entropy. 
UPCD performs similarly to recommended methods (one-step and bias-adjusted three-step), but provides greater flexibility and scope for incorporating the latent grouping within any commonly-used counterfactual mediation approach.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"818-840"},"PeriodicalIF":5.3,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11286213/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141184867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}