Pub Date: 2026-01-14 | DOI: 10.1080/00273171.2026.2613311
Ethan M McCormick
There has been a growing interest in using earlier change to predict downstream distal outcomes in development; however, prior work has mostly focused on estimating the unique effect of the different growth parameters (e.g., intercept and slope) rather than focusing on the trajectory as a whole. Here I lay out a distal outcome latent curve model with latent interactions which attempts to model the joint effect of growth parameters on these later outcomes. I show again that these models require us to contend with unintuitive time coding effects which can impact the direction and significance of effects and that plotting and probing are necessary for disambiguating these joint effects. These graphical approaches emphasize practical steps for applied researchers in understanding these effects. I then outline how future research can help clarify optimal approaches for using the trajectory as a whole rather than the unique effects of its individual sub-components.
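To make the time-coding point concrete, here is a small numerical sketch (my own illustration, not the paper's latent-variable estimator): the growth factors are treated as observed scores and the distal outcome is generated noise-free so the regression algebra is exact. Recentering time redefines the intercept, which flips the sign of the slope's unique effect and introduces a quadratic slope term, even though the data are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# growth factors: intercept (defined at t = 0) and slope, correlated
b0 = rng.normal(0.0, 1.0, n)
b1 = 0.3 * b0 + rng.normal(0.0, 0.5, n)

# distal outcome with a joint (intercept x slope) effect; noise-free
g0, g1, g2, g3 = 1.0, 0.4, 0.7, 0.5
z = g0 + g1 * b0 + g2 * b1 + g3 * b0 * b1

def fit(intercept, slope, z):
    # regress z on intercept, slope, their product, and slope^2
    # (the quadratic term is needed after recentering; see below)
    X = np.column_stack([np.ones_like(z), intercept, slope,
                         intercept * slope, slope ** 2])
    return np.linalg.lstsq(X, z, rcond=None)[0]

c = 3.0                  # recenter time at t = 3
b0_c = b0 + c * b1       # the intercept under the new time coding

orig = fit(b0, b1, z)
recoded = fit(b0_c, b1, z)
print(orig.round(3))     # [g0, g1, g2, g3, 0]
print(recoded.round(3))  # [g0, g1, g2 - c*g1, g3, -c*g3]
```

The algebra behind the second fit: substituting b0 = b0_c - c*b1 gives z = g0 + g1*b0_c + (g2 - c*g1)*b1 + g3*b0_c*b1 - c*g3*b1^2, so the slope's "unique" coefficient changes sign here (0.7 becomes -0.5) purely through the choice of time origin.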
"Moderating the Consequences of Longitudinal Change for Distal Outcomes." Multivariate Behavioral Research, pp. 1-19.
Pub Date: 2026-01-01 | Epub Date: 2025-11-02 | DOI: 10.1080/00273171.2025.2552304
Melanie V Partsch, David Goretzko
Despite the popularity of structural equation modeling in psychological research, accurately evaluating the fit of these models to data is still challenging. Using fixed fit index cutoffs is error-prone due to the fit indices' dependence on various features of the model and data ("nuisance parameters"). Nonetheless, applied researchers mostly rely on fixed fit index cutoffs, neglecting the risk of falsely accepting (or rejecting) their model. With the goal of developing a broadly applicable method that is almost independent of nuisance parameters, we introduce a machine learning (ML)-based approach to evaluate the fit of multi-factorial measurement models. We trained an ML model based on 173 model and data features that we extracted from 1,323,866 simulated data sets and models fitted by means of confirmatory factor analysis. We evaluated the performance of the ML model based on 1,659,386 independent test observations. The ML model performed very well in detecting model (mis-)fit in most conditions, thereby outperforming commonly used fixed fit index cutoffs across the board. Only minor misspecifications, such as a single neglected residual correlation, proved to be challenging to detect. This proof-of-concept study shows that ML is very promising in the context of model fit evaluation.
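A toy caricature of such a pipeline (my own sketch with simulated data: one hand-crafted residual feature and plain logistic regression stand in for the paper's 173 features and super learning, and the crude equal-loading fit is not ML-based CFA):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_corr(misspecified, n=200, p=6, lam=0.7):
    # population: one-factor model; the misspecified version adds one
    # residual correlation (items 0 and 1) that the factor model omits
    Sigma = np.full((p, p), lam**2) + np.eye(p) * (1 - lam**2)
    if misspecified:
        Sigma[0, 1] = Sigma[1, 0] = lam**2 + 0.3
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    return np.corrcoef(X, rowvar=False)

def misfit_feature(R):
    # residual spread around an equal-loading one-factor model whose
    # common correlation is estimated by the mean off-diagonal entry
    off = R[np.triu_indices_from(R, k=1)]
    return np.std(off)

labels = np.array([m for _ in range(200) for m in (0.0, 1.0)])
feats = np.array([misfit_feature(sample_corr(bool(m))) for m in labels])
f = (feats - feats.mean()) / feats.std()      # standardized feature
X = np.column_stack([np.ones_like(f), f])

beta = np.zeros(2)
for _ in range(500):   # gradient ascent on the Bernoulli log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (labels - p) / len(labels)

acc = np.mean((X @ beta > 0) == (labels == 1))
print(f"training accuracy of the learned decision rule: {acc:.2f}")
```

The learned rule adapts its threshold to the simulated feature distribution rather than relying on a fixed cutoff, which is the core idea the paper scales up.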
"Detecting Model Misfit in Structural Equation Modeling with Machine Learning: A Proof of Concept." Multivariate Behavioral Research, pp. 1-24.
Pub Date: 2026-01-01 | Epub Date: 2025-11-17 | DOI: 10.1080/00273171.2025.2565598
Flora Le, Dorothea Dumuid, Tyman E Stanford, Joshua F Wiley
Multilevel compositional data, such as data sampled over time that are non-negative and sum to a constant value, are common in various fields. However, there is currently no software specifically built to model compositional data in a multilevel framework. The R package multilevelcoda implements a collection of tools for modeling compositional data in a Bayesian multivariate, multilevel pipeline. The user-friendly setup only requires the data, model formula, and minimal specification of the analysis. This article outlines the statistical theory underlying the Bayesian compositional multilevel modeling approach and details the implementation of the functions available in multilevelcoda, using an example dataset of compositional daily sleep-wake behaviors. This innovative method can be used to robustly answer scientific questions from the increasingly available multilevel compositional data from intensive, longitudinal studies.
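The package itself is in R; as a language-neutral sketch of the underlying geometry, the following Python snippet closes a daily time-use vector to the simplex, maps it to isometric log-ratio (ilr) coordinates with a Helmert-type orthonormal basis, and maps it back. The basis construction is one standard choice, not necessarily the one multilevelcoda uses internally.

```python
import numpy as np

def helmert_basis(D):
    # orthonormal basis of the clr hyperplane (each column sums to zero)
    H = np.zeros((D, D - 1))
    for j in range(1, D):
        H[:j, j - 1] = 1.0 / j
        H[j, j - 1] = -1.0
        H[:, j - 1] *= np.sqrt(j / (j + 1.0))
    return H

def ilr(x):
    # centered log-ratio, then projection onto the orthonormal basis
    clr = np.log(x) - np.log(x).mean(axis=-1, keepdims=True)
    return clr @ helmert_basis(x.shape[-1])

def ilr_inv(z, D):
    # back-transform: reconstruct clr, exponentiate, re-close
    clr = z @ helmert_basis(D).T
    e = np.exp(clr)
    return e / e.sum(axis=-1, keepdims=True)

# minutes in a 1440-minute day: sleep, sedentary, light, vigorous
day = np.array([480.0, 600.0, 300.0, 60.0])
comp = day / day.sum()          # closure to the unit simplex
z = ilr(comp)                   # 3 unconstrained real coordinates
back = ilr_inv(z, 4)
print(z.round(3), (back * 1440).round(1))
```

The ilr coordinates are unconstrained reals, which is what allows them to enter an ordinary multivariate multilevel model as outcomes or predictors.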
"Bayesian Multilevel Compositional Data Analysis with the R Package multilevelcoda." Multivariate Behavioral Research, pp. 192-210.
Pub Date: 2026-01-01 | Epub Date: 2025-10-01 | DOI: 10.1080/00273171.2025.2561945
Haoran Li, Wen Luo
Single-case experimental designs (SCEDs) involve repeated measurements of a small number of cases under different experimental conditions, offering valuable insights into treatment effects. However, challenges arise in the analysis of SCEDs when autocorrelation is present in the data. Recently, generalized linear mixed models (GLMMs) have emerged as a promising statistical approach for SCEDs with count outcomes. While prior research has demonstrated the effectiveness of GLMMs, these studies have typically assumed error independence, an assumption that may be violated in SCEDs due to serial dependency. This study aims to evaluate two possible solutions for autocorrelated SCED count data: 1) to assess the robustness of previously introduced GLMMs such as Poisson, negative binomial, and observation-level random effects models under various levels of autocorrelation, and 2) to evaluate the performance of a new GLMM and a linear mixed model (LMM), both of which incorporate an autoregressive error structure. Through a Monte Carlo simulation study, we have examined bias, coverage rates, and Type I error rates of treatment effect estimators, providing recommendations for handling autocorrelation in the analysis of SCED count data. A demonstration with real SCED count data is provided. The implications, limitations, and future research directions are also discussed.
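A minimal simulation of the data-generating process at issue (my own sketch with hypothetical parameter values, not the authors' models): Poisson counts whose log-intensity carries AR(1) errors retain serial dependency that an independence-assuming GLMM would ignore.

```python
import numpy as np

rng = np.random.default_rng(42)

def sim_counts(phi, T=1000, beta0=1.0, beta1=0.5, sigma=0.4):
    # Poisson counts whose log-intensity carries AR(1) errors;
    # the second half of the series is the "treatment" phase
    phase = (np.arange(T) >= T // 2).astype(float)
    e = np.zeros(T)
    for t in range(1, T):
        e[t] = phi * e[t - 1] + rng.normal(0, sigma)
    return rng.poisson(np.exp(beta0 + beta1 * phase + e)), phase

def lag1_autocorr(x):
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

y_ar, phase = sim_counts(phi=0.7)   # serially dependent errors
y_ind, _ = sim_counts(phi=0.0)      # independent errors
r_ar = lag1_autocorr(y_ar[phase == 0])    # baseline phase only
r_ind = lag1_autocorr(y_ind[phase == 0])
print(f"lag-1 autocorrelation: AR errors {r_ar:.2f}, "
      f"independent errors {r_ind:.2f}")
```

The nonzero lag-1 autocorrelation under phi = 0.7 is exactly the feature that motivates adding an autoregressive error structure to the GLMM.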
"Analyzing Count Data in Single Case Experimental Designs with Generalized Linear Mixed Models: Does Serial Dependency Matter?" Multivariate Behavioral Research, pp. 136-160.
Pub Date: 2026-01-01 | Epub Date: 2025-09-30 | DOI: 10.1080/00273171.2025.2561947
Alessandro Barbiero
It is a well-known fact that for the bivariate normal distribution the ratio between the point-polyserial correlation (the linear correlation after one of the two variables is discretized into k categories with probabilities p_i, i = 1, …, k) and the polyserial correlation ρ (the linear correlation between the two normal components) remains constant with ρ, keeping the p_i's fixed. If we move away from the bivariate normal distribution, by considering non-normal margins and/or non-normal dependence structures, then the constancy of this ratio may get lost. In this work, the magnitude of the departure from the constancy condition is assessed for several combinations of margins (normal, uniform, exponential, Weibull) and copulas (Gauss, Frank, Gumbel, Clayton), also varying the distribution of the discretized variable. The results indicate that for many settings we are far from the condition of constancy, especially when highly asymmetrical marginal distributions are combined with copulas that allow for tail-dependence. In such cases, the linear correlation may even increase instead of decreasing, contrary to the usual expectation. This implies that most existing simulation techniques or statistical models for mixed-type data, which assume a linear relationship between point-polyserial and polyserial correlations, should be used very prudently and possibly reappraised.
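The bivariate-normal constancy is easy to see: writing Y = ρX + sqrt(1 - ρ²)ε gives corr(D, Y) = ρ · corr(D, X) for any discretization D of X, so the ratio equals corr(D, X) regardless of ρ. A quick simulation check of this baseline fact (normal case only; the paper's copula-based departures are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
cuts = np.array([-0.5, 0.8])             # thresholds -> 3 categories

x = rng.normal(size=n)
d = np.digitize(x, cuts).astype(float)   # discretized version of x

ratios = []
for rho in (0.2, 0.5, 0.8):
    # construct y with exact polyserial correlation rho to x
    y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
    ratios.append(np.corrcoef(d, y)[0, 1] / rho)

print("point-polyserial / polyserial for rho = .2, .5, .8:",
      np.round(ratios, 3))
print("corr(d, x):", round(float(np.corrcoef(d, x)[0, 1]), 3))
```

All three ratios agree with corr(d, x) up to Monte Carlo error; it is this invariance that breaks down under the non-normal margins and copulas the paper studies.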
"On the Ratio Between Point-Polyserial and Polyserial Correlations for Non-Normal Bivariate Distributions." Multivariate Behavioral Research, pp. 161-177.
Pub Date: 2026-01-01 | Epub Date: 2025-10-22 | DOI: 10.1080/00273171.2025.2557275
Yuqi Liu, Zsuzsa Bakk, Ethan M McCormick, Mark de Rooij
Growth mixture models (GMMs) are popular approaches for modeling unobserved population heterogeneity over time. GMMs can be extended with covariates, predicting latent class (LC) membership, the within-class growth trajectories, or both. However, current estimators are sensitive to misspecifications in complex models. We propose extending the two-step estimator for LC models to GMMs, which provides robust estimation against model misspecifications (namely, ignored and overfitted direct effects) for simpler LC models. We conducted several simulation studies, comparing the performance of the proposed two-step estimator to the commonly used one- and three-step estimators. Three different population models were considered, including covariates that predicted only the LC membership (I), adding direct effects to the latent intercept (II), or to both growth factors (III). Results show that when predicting LC membership alone, all three estimators are unbiased when the measurement model is strong, with weak measurement model results being more nuanced. Alternatively, when including covariate effects on the growth factors, the two-step and three-step estimators are consistently robust to misspecification, with unbiased estimates across simulation conditions, although both tend to underestimate standard errors; the one-step estimator is the most sensitive to misspecification.
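A stripped-down sketch of the two-step logic on a one-indicator, two-class toy mixture (known unit variances, EM written out by hand; my own illustration, not the authors' implementation): step 1 estimates the measurement model ignoring the covariate, step 2 fixes those estimates and recovers the covariate-to-class logistic coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# ground truth: two latent classes with different indicator means;
# a covariate x shifts class membership (logit -1 + 2x)
n = 4000
x = rng.normal(size=n)
cls = (rng.random(n) < 1 / (1 + np.exp(-(-1 + 2 * x)))).astype(int)
y = rng.normal(np.where(cls == 1, 2.0, -1.0), 1.0)  # single indicator

# --- step 1: measurement model only (two-normal mixture, EM) ---
mu = np.array([-2.0, 1.0]); pi = 0.5
for _ in range(200):
    d0 = (1 - pi) * np.exp(-0.5 * (y - mu[0])**2)
    d1 = pi * np.exp(-0.5 * (y - mu[1])**2)
    r = d1 / (d0 + d1)                 # posterior for class 1
    mu = np.array([np.sum((1 - r) * y) / np.sum(1 - r),
                   np.sum(r * y) / np.sum(r)])
    pi = r.mean()

# --- step 2: fix the measurement parameters, estimate the
# covariate -> class logistic coefficients by EM with Newton steps ---
b = np.zeros(2)
X = np.column_stack([np.ones(n), x])
for _ in range(50):
    p1 = 1 / (1 + np.exp(-(X @ b)))    # prior class probability
    d0 = (1 - p1) * np.exp(-0.5 * (y - mu[0])**2)
    d1 = p1 * np.exp(-0.5 * (y - mu[1])**2)
    r = d1 / (d0 + d1)                 # E-step with mu held fixed
    w = p1 * (1 - p1)                  # one Newton step (M-step)
    b += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (r - p1))

print("class means:", mu.round(2), "logit coefficients:", b.round(2))
```

Because step 2 never re-estimates the measurement parameters, misspecification in the structural part cannot propagate back into them, which is the robustness property the paper examines for GMMs.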
"A Two-Step Estimator for Growth Mixture Models with Covariates in the Presence of Direct Effects." Multivariate Behavioral Research, pp. 52-73.
Pub Date: 2026-01-01 | Epub Date: 2025-11-10 | DOI: 10.1080/00273171.2025.2557274
Rebecca Kuiper, Ellen Hamaker
The appeal of lagged-effects models, like the first-order vector autoregressive (VAR(1)) model, is the interpretation of the lagged coefficients in terms of predictive, and possibly causal, relationships between variables over time. While the focus in VAR(1) applications has traditionally been on the strength and sign of the lagged relationships, there has been a growing interest in the residual relationships (i.e., the correlations between the innovations) as well. In this article, we investigate what residual correlations can and cannot signal, for both the discrete-time (DT) and continuous-time (CT) VAR(1) model, when inspecting a CT process. We show that one should not take on a DT perspective when investigating a CT process: Correlated (i.e., non-zero) DT residuals can flag omitted common causes and effects at shorter intervals (which is well-known) but, when the underlying process is CT, also effects at longer intervals. Furthermore, when inspecting a CT process, uncorrelated (i.e., zero) DT residuals do not imply that the variables have no effect on each other at other intervals, nor does it preclude the risk of having omitted common causes. Additionally, we show that residual correlations in a CT model signal omitted causes for one or more of the observed variables. This may bias the estimation of lagged relationships, implying that the found predictive lagged relationships do not equal the underlying causal lagged relationships. Unfortunately, the CT residual correlations do not reflect the magnitude of the distortion.
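The interval dependence is easy to reproduce numerically. In the sketch below (hypothetical drift matrix; CT diffusion noise deliberately uncorrelated), the DT innovation correlation implied by a CT Ornstein-Uhlenbeck process is nonzero and changes with the sampling interval, illustrating why DT residual correlations should not be read without a CT process in mind.

```python
import numpy as np

def expm(M, terms=40):
    # matrix exponential via its Taylor series (fine for small matrices)
    out = np.eye(M.shape[0]); term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-0.8, 0.4],
              [0.0, -0.5]])      # CT drift with an x2 -> x1 cross-effect
Q = np.eye(2)                    # CT diffusion: *uncorrelated* noise

def dt_params(dt, steps=2000):
    # Phi(dt) = expm(A dt); Psi(dt) = integral_0^dt expm(As) Q expm(A's) ds
    s = np.linspace(0, dt, steps + 1)
    h = dt / steps
    vals = [expm(A * si) @ Q @ expm(A * si).T for si in s]
    Psi = h * (0.5 * vals[0] + 0.5 * vals[-1] + sum(vals[1:-1]))
    return expm(A * dt), Psi

res = {}
for dt in (0.5, 2.0):
    Phi, Psi = dt_params(dt)
    res[dt] = Psi[0, 1] / np.sqrt(Psi[0, 0] * Psi[1, 1])
    print(f"interval {dt}: implied DT innovation correlation {res[dt]:.3f}")
```

Even though the CT innovations are uncorrelated, the lagged cross-effect in A induces DT residual correlations whose size depends on the chosen interval.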
"Correlated Residuals in Lagged-Effects Models: What They (Do Not) Represent in the Case of a Continuous-Time Process." Multivariate Behavioral Research, pp. 25-51.
Pub Date: 2026-01-01 | Epub Date: 2025-11-20 | DOI: 10.1080/00273171.2025.2561942
Christoph Jindra, Karoline A Sachse
State-of-the-art causal inference methods for observational data promise to relax assumptions threatening valid causal inference. Targeted maximum likelihood estimation (TMLE), for example, is a template for constructing doubly robust, semiparametric, efficient substitution estimators, providing consistent estimates if the outcome or treatment model is correctly specified. Compared to standard approaches, it reduces the risk of misspecification bias by allowing (nonparametric) machine-learning techniques, including super learning, to estimate the relevant components of the data distribution. We briefly introduce TMLE and demonstrate its use by estimating the effects of private tutoring in mathematics during Year 7 on mathematics proficiency and grades using observational data from starting cohort 3 of the National Education Panel Study (N = 4,167). We contrast TMLE estimates to those from ordinary least squares, the parametric G-formula, and the augmented inverse-probability weighted estimator. Our findings reveal close agreement between methods for end-of-year grades. However, variations emerge when examining mathematics proficiency as the outcome, highlighting that substantive conclusions may depend on the analytical approach. The results underscore the significance of employing advanced causal inference methods, such as TMLE, when navigating the complexities of observational data and highlight the nuanced impact of methodological choices on the interpretation of study outcomes.
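A compact numerical sketch of the TMLE recipe for a binary outcome (simulated data; correctly specified parametric fits via Newton-Raphson stand in for super learning): fit initial outcome and propensity models, fluctuate the outcome fit along the "clever covariate", and average the targeted predictions.

```python
import numpy as np

rng = np.random.default_rng(11)
sig = lambda t: 1 / (1 + np.exp(-t))

def logit_fit(X, y, iters=25):
    # plain Newton-Raphson logistic regression
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sig(X @ b)
        w = p * (1 - p)
        b += np.linalg.solve(X.T @ (w[:, None] * X)
                             + 1e-8 * np.eye(X.shape[1]),
                             X.T @ (y - p))
    return b

n = 5000
W = rng.normal(size=n)                                # confounder
A = (rng.random(n) < sig(0.8 * W)).astype(float)      # treatment
Y = (rng.random(n) < sig(-0.5 + 1.0 * A + 1.0 * W)).astype(float)

ones = np.ones(n)
bq = logit_fit(np.column_stack([ones, A, W]), Y)      # outcome model
bg = logit_fit(np.column_stack([ones, W]), A)         # propensity model
g = sig(np.column_stack([ones, W]) @ bg)
Q = sig(bq[0] + bq[1] * A + bq[2] * W)
Q1 = sig(bq[0] + bq[1] + bq[2] * W)
Q0 = sig(bq[0] + bq[2] * W)

# targeting step: fluctuate Q along the clever covariate H
H = A / g - (1 - A) / (1 - g)
eps = 0.0
for _ in range(25):                    # 1-D Newton for epsilon
    p = sig(np.log(Q / (1 - Q)) + eps * H)
    eps += np.sum(H * (Y - p)) / np.sum(H**2 * p * (1 - p))
Q1s = sig(np.log(Q1 / (1 - Q1)) + eps / g)
Q0s = sig(np.log(Q0 / (1 - Q0)) - eps / (1 - g))
ate = np.mean(Q1s - Q0s)
print(f"TMLE estimate of the ATE: {ate:.3f}")
```

Because both working models are correctly specified here, epsilon stays near zero; the targeting step earns its keep when the initial outcome fit is off but the propensity model is sound (or vice versa).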
"Targeted Maximum Likelihood Estimation for Causal Inference With Observational Data: The Example of Private Tutoring." Multivariate Behavioral Research, pp. 74-93.
Pub Date: 2026-01-01 | DOI: 10.1080/00273171.2025.2561944
Yongchun Wang, Jinlan Cao, Wandong Chen, Zhengqi Tang, Tingyi Liu, Zhen Mu, Peng Liu, Yonghui Wang
Numerous studies have shown that motor inhibition can be triggered automatically when the cognitive system encounters interfering stimuli, even a suspicious stimulus in the absence of perceptual awareness (e.g., the negative compatibility effect). This study investigated the effect of temporal expectation, a top-down active preparation for future events, on unconscious inhibitory processing both in the local expectation context on a trial-by-trial basis (Experiment 1) and in the global expectation context on a block-wise basis (Experiment 2). Modeling of the behavioral data using a drift-diffusion model showed that temporal expectation can accelerate the evidence accumulation and improve response caution, regardless of context. Importantly, the acceleration is lower when the target is consistent with the suspicious response tendency induced by the subliminal prime than when the target is inconsistent with it, which is significantly correlated with the behavioral RTs (i.e., the compatibility effect). The results provide evidence for a framework in which temporal expectation enhances inhibitory control of unconscious processes. The mechanism is likely to be that temporal expectation enhances the activations afforded by subliminal stimuli and the strength of cognitive monitoring, so that the cognitive system suppresses these suspicious activations more strongly, preventing them from escaping and interfering with subsequent processing.
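A minimal Euler-scheme simulation of the drift-diffusion account (a generic two-boundary DDM with hypothetical parameter values, not the paper's hierarchical Bayesian fit): a higher drift rate, as under temporal expectation, yields faster and more accurate responses.

```python
import numpy as np

rng = np.random.default_rng(5)

def ddm_trial(v, a=1.0, z=0.5, dt=0.001, s=1.0, t0=0.3):
    # Euler simulation of one trial: evidence starts at z*a and
    # accumulates with drift v until it hits boundary 0 or a
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return t + t0, x >= a      # RT (with non-decision time), correct?

def summarize(v, n=400):
    trials = [ddm_trial(v) for _ in range(n)]
    correct = [rt for rt, c in trials if c]
    return np.mean(correct), len(correct) / n

rt_lo, acc_lo = summarize(v=1.0)   # slower evidence accumulation
rt_hi, acc_hi = summarize(v=2.5)   # faster accumulation (expectation)
print(f"v=1.0: RT {rt_lo:.3f}s, acc {acc_lo:.2f}; "
      f"v=2.5: RT {rt_hi:.3f}s, acc {acc_hi:.2f}")
```

In the paper's analysis the drift rate is the parameter through which temporal expectation and prime compatibility jointly act; this sketch just shows the qualitative RT/accuracy signature of a drift-rate change.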
"The Impact of Temporal Expectation on Unconscious Inhibitory Processing: A Computational Analysis Using Hierarchical Drift Diffusion Modeling." Multivariate Behavioral Research, pp. 116-135.
Pub Date : 2026-01-01 Epub Date: 2025-11-21 DOI: 10.1080/00273171.2025.2565591
Monica Morell, Muwon Kwon, Youngjin Han, Youjin Sung, Yang Liu, Ji Seung Yang
A regression discontinuity (RD) design is often employed to provide causal evidence when randomization of the treatment assignment is infeasible. When variables of interest are latent constructs measured by observed indicators, the conventional RD analysis using observed variable scores does not allow researchers to examine heterogeneity in the estimated local average treatment effect (ATE) or to generalize the ATE to participants away from the cutoff. We propose a novel methodological augmentation to the conventional RD analysis, which assumes the availability of multiple indicator variables (i.e., raw item responses) that measure the latent construct underlying the running variable. By specifying an explicit measurement model based on those indicator variables, our latent RD framework allows 1) defining the local ATE conditional on the latent construct, 2) disentangling the heterogeneity of the local ATE, and 3) generalizing the local ATE to running variable scores away from the cutoff. In a proof-of-concept simulation, we illustrate that the proposed augmentation recovers the parameters of interest well under practical test-length and sample-size conditions.
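For context, the conventional observed-score RD analysis that the authors augment can be sketched as a local linear regression around the cutoff, where the coefficient on the treatment indicator estimates the local ATE. This is a minimal sharp-RD sketch on simulated data with a hand-picked bandwidth, not the authors' latent-variable framework; the data-generating values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(0.0, 1.0, n)           # observed running-variable score
cutoff = 0.0
d = (x >= cutoff).astype(float)       # sharp treatment assignment at the cutoff
tau = 0.5                             # true local treatment effect (hypothetical)
y = 1.0 + 0.8 * x + tau * d + rng.normal(0.0, 0.5, n)

# Local linear regression within bandwidth h of the cutoff:
#   y ~ 1 + d + (x - c) + d*(x - c)
# The coefficient on d is the estimated local ATE at the cutoff.
h = 0.5
m = np.abs(x - cutoff) <= h
xc = x[m] - cutoff
X = np.column_stack([np.ones(m.sum()), d[m], xc, d[m] * xc])
beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
ate_hat = beta[1]
print(ate_hat)   # should recover a value near tau
```

Note what this sketch cannot do, which motivates the paper: because it conditions on the observed score rather than the latent construct, the estimate applies only at the cutoff and cannot be decomposed or extrapolated along the latent running variable.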
{"title":"Regression Discontinuity Analysis with Latent Variables.","authors":"Monica Morell, Muwon Kwon, Youngjin Han, Youjin Sung, Yang Liu, Ji Seung Yang","doi":"10.1080/00273171.2025.2565591","DOIUrl":"10.1080/00273171.2025.2565591","url":null,"abstract":"<p><p>A regression discontinuity (RD) design is often employed to provide causal evidence when randomization of the treatment assignment is infeasible. When variables of interest are latent constructs measured by observed indicators, the conventional RD analysis using observed variable scores does not allow researchers to examine heterogeneity in the estimated local average treatment effect (ATE) or to generalize the ATE to participants away from the cutoff. We propose a novel methodological augmentation to the conventional RD analysis, which assumes the availability of multiple indicator variables (i.e., raw item responses) that measure the latent construct underlying the running variable. By specifying an explicit measurement model based on those indicator variables, our latent RD framework allows 1) defining the local ATE conditional on the latent construct, 2) disentangling the heterogeneity of the local ATE, and 3) generalizing the local ATE to running variable scores away from the cutoff. In a proof-of-concept simulation, we illustrate that the proposed augmentation recovers the parameters of interest well under practical test-length and sample-size conditions.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"178-191"},"PeriodicalIF":3.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145574752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}