Pub Date: 2025-05-01 | Epub Date: 2025-01-22 | DOI: 10.1080/00273171.2025.2450648
K B S Huth, B DeLong, L Waldorp, M Marsman, M Rhemtulla
Psychometric networks can be estimated using nodewise regression to estimate edge weights when the joint distribution is analytically difficult to derive or the estimation is too computationally intensive. The nodewise approach runs generalized linear models with each node as the outcome. Two regression coefficients are obtained for each link, which need to be aggregated to obtain the edge weight (i.e., the conditional association). The nodewise approach has been shown to reveal the true graph structure. However, for continuous variables, the regression coefficients are scaled differently from the partial correlations, and therefore the nodewise approach may lead to different edge weights. Here, the aggregation of the two regression coefficients is crucial to obtaining the true partial correlation. We show that when the correlations of the two predictors with the control variables differ, averaging the regression coefficients leads to an asymptotically biased estimator of the partial correlation. This is likely to occur when a variable has a high correlation with other nodes in the network (e.g., variables in the same domain) and a lower correlation with another node (e.g., variables in a different domain). We discuss two ways of aggregating the regression weights that recover the true partial correlation: first, multiplying the weights and taking the square root of the product, and second, rescaling the regression weight by the residual variances. These two estimators can recover the true network structure and edge weights.
Title: Nodewise Parameter Aggregation for Psychometric Networks. Multivariate Behavioral Research, pp. 509-517.
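The bias described in this abstract can be illustrated at the population level. In a Gaussian graphical model, the nodewise coefficient for node j in the regression of node i equals -K_ij/K_ii, where K is the precision matrix, while the partial correlation is -K_ij/sqrt(K_ii K_jj). The sketch below uses a toy covariance matrix of my own (not the authors' example) to compare averaging the two coefficients with taking the signed square root of their product:

```python
import numpy as np

# Population covariance for three standardized variables: X1 correlates
# weakly with the control Z, X2 strongly -- the "different correlations
# with the control variables" condition flagged in the abstract.
S = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.6],
              [0.1, 0.6, 1.0]])
K = np.linalg.inv(S)  # precision matrix

# True partial correlation between X1 and X2 given Z
rho_12 = -K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])

# Nodewise regression coefficients: regressing node i on the rest
# gives beta_ij = -K_ij / K_ii
b_12 = -K[0, 1] / K[0, 0]   # X2's weight when X1 is the outcome
b_21 = -K[0, 1] / K[1, 1]   # X1's weight when X2 is the outcome

avg  = (b_12 + b_21) / 2                     # simple average (biased)
geom = np.sign(b_12) * np.sqrt(b_12 * b_21)  # signed sqrt of the product

print(rho_12, avg, geom)
```

The square-root-of-product aggregate matches the partial correlation exactly, while the simple average is off whenever the two precision diagonals K_11 and K_22 differ, i.e., whenever the two nodes relate differently to the controls.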
Pub Date: 2025-05-01 | Epub Date: 2025-02-12 | DOI: 10.1080/00273171.2025.2453454
Matthew J Madison, Minjeong Jeon, Michael Cotterell, Sergio Haab, Selay Zor
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees as proficient or non-proficient on specified latent attributes. Longitudinal DCMs have recently been developed as psychometric models for changes in examinee proficiency status over time. Currently, software programs for estimating longitudinal DCMs are limited in functionality and generality, expensive, or cumbersome for applied researchers. This manuscript describes and demonstrates a newly developed R package for estimating a general longitudinal DCM, the transition diagnostic classification model.
Title: TDCM: An R Package for Estimating Longitudinal Diagnostic Classification Models. Multivariate Behavioral Research, pp. 518-527.
Pub Date: 2025-05-01 | Epub Date: 2025-02-14 | DOI: 10.1080/00273171.2025.2455497
Austin Wyman, Zhiyong Zhang
Automated detection of facial emotions has interested social and behavioral researchers for decades but has become feasible only recently. In this tutorial, we review three popular artificial-intelligence-based emotion detection programs that are accessible to R programmers: Google Cloud Vision, Amazon Rekognition, and Py-Feat. We present their advantages and disadvantages and provide sample code so that researchers can immediately begin designing, collecting, and analyzing emotion data. Furthermore, we provide an introductory-level explanation of the machine learning, deep learning, and computer vision algorithms that underlie most emotion detection programs, in order to improve literacy of explainable artificial intelligence in the social and behavioral science literature.
Title: A Tutorial on the Use of Artificial Intelligence Tools for Facial Emotion Recognition in R. Multivariate Behavioral Research, pp. 641-655.
Pub Date: 2025-05-01 | Epub Date: 2025-01-22 | DOI: 10.1080/00273171.2024.2444943
Mijke Rhemtulla, Victoria Savalei
In this tutorial, we clarify the distinction between estimated factor scores, which are weighted composites of observed variables, and true factor scores, which are unobservable values of the underlying latent variable. Using an analogy with linear regression, we show how predicted values in linear regression share the properties of the most common type of factor score estimates, regression factor scores, computed from single-indicator and multiple-indicator latent variable models. Using simulated data from 1- and 2-factor models, we also show how the amount of measurement error affects the reliability of regression factor scores, and we compare the performance of regression factor scores with that of unweighted sum scores.
Title: Estimated Factor Scores Are Not True Factor Scores. Multivariate Behavioral Research, pp. 598-619.
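The gap between estimated and true factor scores is easy to see in a small simulation. The sketch below uses my own loadings and sample size (not the authors' simulation design): it generates data from a one-factor model, computes regression (Thurstone) factor scores, and correlates them with the true factor scores and with unweighted sum scores:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
loadings = np.array([0.8, 0.7, 0.6, 0.5])
theta = 1 - loadings**2                      # unique variances (standardized)

eta = rng.standard_normal(n)                 # true (unobservable) factor scores
x = np.outer(eta, loadings) + rng.standard_normal((n, 4)) * np.sqrt(theta)

# Regression (Thurstone) factor scores: eta_hat = Lambda' Sigma^{-1} x,
# with the model-implied covariance Sigma = Lambda Lambda' + Theta.
Sigma = np.outer(loadings, loadings) + np.diag(theta)
w = np.linalg.solve(Sigma, loadings)
eta_hat = x @ w
sum_score = x.sum(axis=1)

r_reg = np.corrcoef(eta, eta_hat)[0, 1]      # reliability-limited, well below 1
r_sum = np.corrcoef(eta, sum_score)[0, 1]
print(r_reg, r_sum)
```

Even with the optimal weights, the estimated scores correlate noticeably below 1 with the true factor because of measurement error in the indicators; the unweighted sum score does slightly worse when loadings are unequal.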
Pub Date: 2025-05-01 | Epub Date: 2025-02-03 | DOI: 10.1080/00273171.2024.2444940
Debby Ten Hove, Terrence D Jorgensen, L Andries van der Ark
We propose interrater reliability coefficients for observational interdependent social network data, which are dyadic data from a network of interacting subjects who are observed by external raters. Using the social relations model, dyadic scores of subjects' behaviors during these interactions can be decomposed into actor, partner, and relationship effects. These effects constitute different facets of theoretical interest about which researchers formulate research questions. Based on generalizability theory, we extend the social relations model with rater effects, resulting in a model that decomposes the variance of dyadic observational data into effects of actors, partners, relationships, raters, and their statistical interactions. We use the variances of these effects to define intraclass correlation coefficients (ICCs) that indicate the extent to which the actor, partner, and relationship effects can be generalized across external raters. We propose Markov chain Monte Carlo estimation of a Bayesian hierarchical linear model to estimate the ICCs, and test their bias and coverage in a simulation study. The method is illustrated using data on social mimicry.
Title: Interrater Reliability for Interdependent Social Network Data: A Generalizability Theory Approach. Multivariate Behavioral Research, pp. 444-459.
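As a purely illustrative sketch of the kind of coefficient involved: in generalizability theory, an ICC of this type is the ratio of the variance of interest to that variance plus the variance that does not generalize across raters. The variance components below are invented, and the article's exact ICC definitions may differ:

```python
# Hypothetical variance components from a social-relations-with-raters
# decomposition (all numbers invented for illustration).
var_actor,   var_actor_x_rater   = 0.40, 0.10
var_partner, var_partner_x_rater = 0.20, 0.05
var_rel,     var_rel_x_rater     = 0.30, 0.15

# One conventional generalizability-theory form: the proportion of each
# effect's variance that generalizes across raters.
icc_actor   = var_actor   / (var_actor   + var_actor_x_rater)
icc_partner = var_partner / (var_partner + var_partner_x_rater)
icc_rel     = var_rel     / (var_rel     + var_rel_x_rater)
print(icc_actor, icc_partner, icc_rel)
```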
Pub Date: 2025-05-01 | Epub Date: 2025-03-23 | DOI: 10.1080/00273171.2025.2454901
Hudson Golino, John Nesselroade, Alexander P Christensen
In the last half of the twentieth century, psychology and neuroscience experienced a renewed interest in intraindividual variation. To date, there are few quantitative methods to evaluate whether a population (between-person) structure is likely to hold for individual people, a property often referred to as ergodicity. We introduce a new network information theoretic metric, the ergodicity information index (EII), that quantifies the amount of information lost by representing all individuals with a between-person structure. A Monte Carlo simulation demonstrated that EII can effectively distinguish between ergodic and nonergodic systems. A bootstrap test is derived to statistically determine whether the empirical data are likely generated by an ergodic process. When a process is identified as nonergodic, it is possible that a mixture of groups exists. To evaluate whether groups exist, we develop an information theoretic clustering method. Finally, two empirical examples are presented using intensive longitudinal data from personality and neuroscience domains. Both datasets were found to be nonergodic, and meaningful groupings were identified in each dataset. Subsequent analysis showed that some of these groups are ergodic, meaning that the individuals can be represented with a single population structure without significant loss of information. Notably, in the neuroscience data, we could correctly identify two clusters of individuals (young vs. older adults) measured by a pattern separation task that were related to hippocampal connectivity to the default mode network.
Title: Toward a Psychology of Individuals: The Ergodicity Information Index and a Bottom-up Approach for Finding Generalizations. Multivariate Behavioral Research, pp. 528-555.
Pub Date: 2025-03-01 | Epub Date: 2025-03-13 | DOI: 10.1080/00273171.2024.2414479
Arianne Herrera-Bennett, Mijke Rhemtulla
Work surrounding the replicability and generalizability of network models has increased in recent years, prompting debate on whether network properties can be expected to be consistent across samples. To date, certain methodological practices may have contributed to observed inconsistencies, including use of single-item indicators and non-identical measurement tools. The current study used a resampling approach to disentangle the effects of sampling variability from scale variability when assessing network replicability in empirical data. Additionally, we explored whether consistencies in network characteristics were improved when more items were aggregated to estimate node scores, which we hypothesized should yield more representative measures of latent constructs. Overall, using different scales produced more variability in network properties than using different samples, but these discrepancies were markedly reduced with larger samples and greater node aggregation. Findings underscored the impact of aggregating items when estimating nodes: Multi-item indicators led to denser networks, higher network sensitivity, greater estimates of global strength, and greater levels of consistency in network properties (e.g., edge weights, centrality scores). Taken together, variability in network properties across samples may arise from poor measurement conditions; additionally, variability may reflect properties of the true network model and/or the measurement instrument. All data and syntax are openly available online (https://osf.io/m37q2/).
Title: Exploring the Effects of Sampling Variability, Scale Variability, and Node Aggregation on the Consistency of Estimated Networks. Multivariate Behavioral Research, pp. 275-295.
Pub Date: 2025-03-01 | Epub Date: 2024-10-20 | DOI: 10.1080/00273171.2024.2412682
Erik Sengewald, Katinka Hardt, Marie-Ann Sengewald
Among the most important merits of modern missing data techniques such as multiple imputation (MI) and full-information maximum likelihood estimation is the possibility to include additional information about the missingness process via auxiliary variables. During the past decade, the choice of auxiliary variables has been investigated under a variety of conditions, and more recent research points to the potentially biasing effect of certain auxiliary variables, particularly colliders (Thoemmes & Rose, 2014). In this article, we further examine the biasing mechanisms of certain auxiliary variables considered in previous research, focusing on their effects on individual diagnosis based on norming, in which the whole distribution of a variable is of interest rather than average coefficients (e.g., means). To this end, we first provide the theoretical underpinnings of the mechanisms under study and then present two focused simulations that (i) directly expand on the collider scenario in Thoemmes and Rose (2014, Appendix A) by considering outcomes that are relevant to norming and (ii) extend the scenarios under consideration to instrumental variable mechanisms. We illustrate the bias mechanisms for two different norming approaches and exemplify the procedures by means of an empirical example. We end by discussing limitations and implications of our research.
Title: A Causal View on Bias in Missing Data Imputation: The Impact of Evil Auxiliary Variables on Norming of Test Scores. Multivariate Behavioral Research, pp. 258-274.
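The collider mechanism at the heart of this scenario is easy to demonstrate: conditioning on a common effect of two independent variables induces a spurious association between them. The toy simulation below is my own minimal setup (not the authors' norming simulations): X and Y are independent causes of an auxiliary variable A, and partialling A out manufactures a negative association:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

x = rng.standard_normal(n)
y = rng.standard_normal(n)          # independent of x by construction
a = x + y + rng.standard_normal(n)  # collider: common effect of x and y

r_xy = np.corrcoef(x, y)[0, 1]      # ~0: x and y are unrelated marginally

# Partial correlation of x and y controlling for the collider a:
# correlate the residuals of x and y after regressing each on a.
rx = x - np.polyval(np.polyfit(a, x, 1), a)
ry = y - np.polyval(np.polyfit(a, y, 1), a)
r_xy_given_a = np.corrcoef(rx, ry)[0, 1]

print(r_xy, r_xy_given_a)
```

Here the population partial correlation given the collider is -0.5 even though x and y are independent, which is the sense in which such an "evil" auxiliary variable can inject bias into an imputation model that conditions on it.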
Pub Date: 2025-03-01 | Epub Date: 2024-10-29 | DOI: 10.1080/00273171.2024.2418515
Judith J M Rijnhart, Matthew J Valente, David P MacKinnon
Despite previous warnings against using the difference-in-coefficients method to estimate the indirect effect when the outcome in the mediation model is binary, the method remains widely used in a variety of fields. Its continued use is presumably due to a lack of awareness that the method conflates the indirect effect estimate with non-collapsibility. In this paper, we aim to demonstrate the problems associated with the difference-in-coefficients method for estimating indirect effects in mediation models with binary outcomes. We provide a formula that decomposes the difference-in-coefficients estimate into (1) an estimate of non-collapsibility and (2) an indirect effect estimate. We use a simulation study and an empirical data example to illustrate the impact of non-collapsibility on the difference-in-coefficients estimate of the indirect effect. Further, we demonstrate the application of several alternative methods for estimating the indirect effect, including the product-of-coefficients method and regression-based causal mediation analysis. The results emphasize the importance of choosing a method for estimating the indirect effect that is not affected by non-collapsibility.
Title: Why You Should Not Estimate Mediated Effects Using the Difference-in-Coefficients Method When the Outcome is Binary. Multivariate Behavioral Research, pp. 296-304. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11991894/pdf/
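Non-collapsibility can be isolated from mediation by making the would-be mediator independent of the exposure, so the true indirect effect is exactly zero, yet the difference between the exposure's logistic coefficients with and without the mediator is still nonzero. The sketch below is my own minimal simulation with a numpy-only Newton-Raphson logistic fit, not the authors' study design:

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend an intercept
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)                    # score vector
        H = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        b = b + np.linalg.solve(H, grad)
    return b

rng = np.random.default_rng(3)
n = 200_000
x = rng.standard_normal(n)
m = rng.standard_normal(n)   # independent of x: the true indirect effect is 0
p = 1.0 / (1.0 + np.exp(-(0.5 * x + 1.5 * m)))
y = rng.binomial(1, p)

c_total = logit_fit(x, y)[1]                         # coef of x in Y ~ X
c_direct = logit_fit(np.column_stack([x, m]), y)[1]  # coef of x in Y ~ X + M
diff = c_total - c_direct
print(c_total, c_direct, diff)
```

The conditional coefficient recovers the generating value (0.5), but the marginal coefficient is attenuated toward zero purely because the logistic scale is non-collapsible, so the difference-in-coefficients "indirect effect" is clearly nonzero despite zero mediation.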
Pub Date: 2025-03-01 | Epub Date: 2024-11-26 | DOI: 10.1080/00273171.2024.2428222
Lydia G Speyer, Xinxin Zhu, Yi Yang, Denis Ribeaud, Manuel Eisner
Random-intercept cross-lagged panel models (RI-CLPMs) are increasingly used to investigate research questions about how one variable at one time point affects another variable at the subsequent time point. Due to the implied temporal sequence of events in such research designs, interpretations of RI-CLPMs primarily focus on longitudinal cross-lagged paths while disregarding concurrent associations, modeling these only as residual covariances. However, this may bias the cross-lagged effects, especially when data collected at the same time point refer to different reference timeframes, creating a temporal sequence of events for constructs measured concurrently. To examine this issue, we conducted a series of empirical analyses, using data from the longitudinal z-proso study, of how modeling or not modeling directional within-time-point associations impacts the inferences drawn from RI-CLPMs. Results highlight that failing to consider directional concurrent effects may lead to biased cross-lagged effects. Thus, it is essential to carefully consider potential directional concurrent effects when choosing models to analyze directional associations between variables over time. If temporal sequences of concurrent effects cannot be clearly established, testing multiple models and drawing conclusions based on the robustness of effects across all models is recommended.
Title: On the Importance of Considering Concurrent Effects in Random-Intercept Cross-Lagged Panel Modelling: Example Analysis of Bullying and Internalising Problems. Multivariate Behavioral Research, pp. 328-344. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996063/pdf/