Pub Date: 2026-03-26 | DOI: 10.1080/00273171.2026.2643868
Katrin Jansen, Steffen Nestler
Often, primary studies that are pooled in a meta-analysis provide information on several outcomes of interest. Multivariate meta-analysis makes it possible to analyze these outcomes simultaneously and to model their relationship, and it can also be more efficient than separate, univariate meta-analyses. However, standard multivariate meta-analysis models typically assume that the between-study variances and correlations are constant across studies. While it is possible to relax this assumption of constant heterogeneity by using location-scale models in univariate meta-analysis, extensions to the multivariate case have not yet been proposed. Here, we fill this gap by describing a location-scale model for the multivariate setting in which both the between-study variances of the different outcomes and the correlations between them can depend on covariates. We examine its performance in a simulation study, where we compare univariate and bivariate location-scale models and different estimation methods. In addition, we show how to apply this model to data from a meta-analysis on the effects of motivational reading instruction on reading achievement and motivation. We discuss the implications of our findings for further research on meta-analysis of multiple outcomes and provide recommendations for the use of multivariate location-scale meta-analysis in applications.
Title: Multivariate Location-Scale Models for Meta-Analysis (Multivariate Behavioral Research, pp. 1-19)
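The abstract describes letting both the between-study variances and the correlation depend on covariates. A minimal numerical sketch of that idea, assuming a log link for the variances and a Fisher-z (tanh) link for the correlation (illustrative link functions, not necessarily the authors' specification):

```python
import numpy as np

def between_study_cov(z, alpha1, alpha2, beta):
    """Hypothetical bivariate location-scale structure: the log between-study
    variances and the Fisher-z of the correlation are linear in a study-level
    covariate z. Coefficient names (alpha1, alpha2, beta) are illustrative."""
    tau2_1 = np.exp(alpha1[0] + alpha1[1] * z)  # log link keeps variances positive
    tau2_2 = np.exp(alpha2[0] + alpha2[1] * z)
    rho = np.tanh(beta[0] + beta[1] * z)        # tanh link keeps rho in (-1, 1)
    cov = rho * np.sqrt(tau2_1 * tau2_2)
    return np.array([[tau2_1, cov], [cov, tau2_2]])

# Covariance matrix for a study with covariate value z = 1.0
T = between_study_cov(z=1.0, alpha1=(-1.0, 0.3), alpha2=(-0.5, 0.1), beta=(0.2, 0.4))
# T is symmetric and positive definite by construction
```

The link functions guarantee a valid covariance matrix for any covariate value, which is the practical appeal of this parameterization.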
Pub Date: 2026-03-18 | DOI: 10.1080/00273171.2026.2634294
Diego Iglesias, Miguel A Sorrel, Ricardo Olmos
Multilevel Models (MLMs) have become a valuable tool in the behavioral and social sciences, providing a framework for analyzing clustered data structures commonly encountered in these fields. Unlike single-level regression, R² measures in MLMs become more intricate due to the need to account for sources of variance at different levels. Recently, Rights and Sterba (2019) introduced an integrative framework of MLM R² measures, providing a unifying approach to interpreting them in relation to specific substantive questions. While this framework represents a valuable resource for applied research, the R² measures have been defined in the population, and their performance across conditions reflecting applied MLM practice remains unexplored. The present study evaluates the performance of the different MLM R² measures as estimators of their population values through Monte Carlo simulations. Among other factors, we examined how the number of level-1 and level-2 predictors, cross-level interactions, and random slopes affect the accuracy of the corresponding MLM R² measures. Results indicate that as the number of level-2 predictors increases, a greater number of clusters is required to ensure accurate estimates. When the model includes more level-1 predictors, cross-level interactions, or random slopes, increasing either the number of clusters or the number of observations per cluster leads to more accurate estimates.
Title: Evaluating the Performance of R-Squared Measures in Multilevel Models (Multivariate Behavioral Research, pp. 1-17)
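The framework referenced above partitions the model-implied outcome variance into distinct sources. A toy sketch of that decomposition idea, assuming five variance components (the published framework defines many more measures than shown here):

```python
def mlm_r2_decomposition(var_fixed_within, var_fixed_between,
                         var_slopes, var_intercepts, var_residual):
    """Toy decomposition in the spirit of an integrative MLM R^2 framework:
    each variance source's share of the total model-implied outcome variance.
    Component names are illustrative, not the framework's notation."""
    total = (var_fixed_within + var_fixed_between +
             var_slopes + var_intercepts + var_residual)
    return {
        "fixed_within":  var_fixed_within / total,   # fixed effects via level-1 predictors
        "fixed_between": var_fixed_between / total,  # fixed effects via level-2 predictors
        "slopes":        var_slopes / total,         # random slope variation
        "intercepts":    var_intercepts / total,     # random intercept variation
        "residual":      var_residual / total,
    }

shares = mlm_r2_decomposition(0.20, 0.15, 0.05, 0.10, 0.50)
# The shares sum to 1 by construction; R^2-type measures are sums of subsets
```

Different R² measures then correspond to summing different subsets of these shares, depending on which sources count as "explained" for a given substantive question.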
Pub Date: 2026-03-16 | DOI: 10.1080/00273171.2026.2619211
James Ohisei Uanhoro, Megan Rojo
We present a hierarchical ordinal model for analyzing single-case designs (SCDs), with a focus on treatment-reversal designs. SCDs involve systematic measurement of outcomes for individual cases across different conditions or phases, aiming to establish causal relations between interventions and behavioral changes. While visual analysis is a common approach in SCDs, the field is increasingly adopting quantitative effect size metrics, such as non-overlap indices, to supplement visual examination. However, statistical theory supporting the use of these indices remains underdeveloped. To address this gap, we developed a Bayesian hierarchical ordinal model that enables the estimation of case-specific non-overlap indices. Through simulation studies, we demonstrate that these indices are more accurate than those obtained via standard approaches. Moreover, the model can generate parametric indices with greater accuracy than standard methods. To facilitate the adoption of this model, we provide an R package (ssrhom) for model estimation. This contribution aims to enhance the analysis and interpretation of SCDs, ultimately advancing our understanding of the efficacy of interventions and promoting evidence-based decision-making.
Title: A Hierarchical Ordinal Regression Model for Treatment-Reversal Designs with Application to Non-Overlap Effect Sizes (Multivariate Behavioral Research, pp. 1-28)
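For readers unfamiliar with the non-overlap indices mentioned above, a standard example is the Non-overlap of All Pairs (NAP): the proportion of baseline-treatment observation pairs in which the treatment value exceeds the baseline value, with ties counted as half. A minimal sketch (the standard computation, not the paper's model-based estimator):

```python
import numpy as np

def nap(baseline, treatment):
    """Non-overlap of All Pairs: compares every baseline observation with
    every treatment observation; ties contribute one half."""
    a = np.asarray(baseline, float)[:, None]   # shape (n_baseline, 1)
    b = np.asarray(treatment, float)[None, :]  # shape (1, n_treatment)
    return np.mean(b > a) + 0.5 * np.mean(b == a)

print(nap([2, 3, 3, 4], [5, 6, 7, 7]))  # 1.0: complete non-overlap
```

The paper's hierarchical ordinal model estimates case-specific versions of such indices rather than computing them directly from the raw phase data.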
Pub Date: 2026-03-09 | DOI: 10.1080/00273171.2026.2636166
Remus Mitchell, Craig K Enders, Yi Feng
It is routinely recommended that level-1 variables in multilevel models be disaggregated when they are of substantive importance. Yet the consensus on the disaggregation of level-1 covariates is more mixed. Disaggregation clarifies interpretation and reduces bias in the covariate, though some methodologists argue that it is unnecessary when the covariate itself is not of substantive interest. Our study builds on recent work by Rights et al. to explore the tradeoffs between bias and precision when choosing to disaggregate level-1 covariates while the primary interest lies in a level-2 predictor. Using a Monte Carlo simulation, we examine how factors such as the intraclass correlation, the magnitude of the contextual effect, the within- and between-level effect sizes, the correlation among level-2 effects, sample size at both levels, and the method of disaggregation (manifest versus latent) influence the bias, precision, and power of a level-2 focal estimate. Our findings suggest that although disaggregation generally improves interpretability and reduces bias, there are conditions under which a non-disaggregated approach may yield greater precision. These insights inform best practices for handling lower-level covariates in multilevel models.
Title: To Disaggregate or Not to Disaggregate: A Focus on Covariates in Multilevel Models (Multivariate Behavioral Research, pp. 1-18)
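Manifest disaggregation, one of the two methods compared in the abstract, splits a level-1 covariate into its observed cluster mean (the between part) and the deviation from that mean (the within part). A minimal sketch with illustrative variable names:

```python
import pandas as pd

# Manifest disaggregation of a level-1 covariate x: cluster means carry the
# between-cluster information; centered deviations carry the within-cluster
# information. The data values are made up for illustration.
df = pd.DataFrame({
    "cluster": [1, 1, 1, 2, 2, 2],
    "x":       [2.0, 4.0, 6.0, 1.0, 2.0, 3.0],
})
df["x_between"] = df.groupby("cluster")["x"].transform("mean")
df["x_within"] = df["x"] - df["x_between"]
# x_within sums to zero within each cluster; x_between is constant within cluster
```

Latent disaggregation instead treats the cluster mean as an estimated latent variable, which avoids the sampling-error bias of observed means in small clusters.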
Pub Date: 2026-03-02 | DOI: 10.1080/00273171.2026.2634993
Xiao Liu, J Mark Eddy, Charles R Martinez
Subgroup analysis is an important tool for studying treatment effect moderation. However, when a subgroup of interest makes up a relatively small proportion of the sample (referred to here as the "focal subgroup"), standard subgroup analysis can encounter practical difficulties (e.g., low estimation precision). In this study, we propose an incremental subgroup analysis approach, which considers how the treatment effect would change as the proportion of the focal subgroup gradually increases. The proposed approach provides estimates and confidence intervals for incremental subgroup effects, allowing the effect moderation trend to be visualized as a continuous curve with a corresponding confidence band. For estimation with baseline covariates, we extend a doubly robust method that can incorporate machine learning approaches to relax modeling assumptions while still allowing quantification of uncertainty for the effect estimate (e.g., via confidence intervals). Simulations are conducted to evaluate the performance of the estimation method. We illustrate the application of the proposed approach in an empirical example, assessing the moderation in the effect of a preventive intervention based on a relatively small subgroup. We hope that the proposed subgroup analysis approach provides an alternative or complementary method for studying effect moderation by subgroups.
Title: Treatment Effect Moderation with Small Subgroups: An Incremental Subgroup Analysis Approach (Multivariate Behavioral Research, pp. 1-16)
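The core idea, a treatment effect traced as the focal subgroup's share grows, can be caricatured as a simple two-component mixture. This is only a hypothetical illustration of the shape of such a curve; the paper's estimand and doubly robust estimator are defined far more generally:

```python
import numpy as np

def incremental_effect(p, tau_focal, tau_rest):
    """Hypothetical mixture illustration: average treatment effect in a
    population where the focal subgroup has share p. tau_focal and tau_rest
    are assumed subgroup-specific effects, not estimated quantities."""
    return p * tau_focal + (1 - p) * tau_rest

# Trace the curve as the focal share rises from 0 to 30%
grid = np.linspace(0.0, 0.3, 7)
curve = [incremental_effect(p, tau_focal=0.5, tau_rest=0.1) for p in grid]
# The curve moves from tau_rest toward tau_focal as p increases
```

In the actual method, each point on such a curve comes with a confidence interval, so the whole trend is reported with a confidence band rather than a bare line.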
Pub Date: 2026-03-02 | DOI: 10.1080/00273171.2026.2634293
Shu Fai Cheung, Mark H C Lai
Measuring case influence on parameter estimates and model fit measures, one type of sensitivity analysis, is important for assessing the robustness of findings in structural equation modeling (SEM). However, such analyses are rarely reported clearly, or are conducted inappropriately, with outlier detection mistaken for influential-case assessment. Some existing tools are limited in the models or estimation methods they support, or in the types of influence measures they can compute. We developed an easy-to-use R package, semfindr, for identifying influential cases in SEM using the leave-one-out (LOO) method. It reduces the computational cost by separating the refitting step from the case influence computation step. It also provides various plot functions for effective assessment of case influence in complicated models. Lastly, it supports multiple-group models and the handling of missing data. This manuscript demonstrates how to use semfindr to search efficiently for influential cases, producing publication-ready results and plots.
Title: semfindr: An R Package for Identifying Influential Cases in Structural Equation Modeling (Multivariate Behavioral Research, pp. 1-9)
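The leave-one-out logic that semfindr applies to SEM fits is easy to see in a simpler setting: refit the model without each case and record how the parameter estimates move. A minimal OLS sketch of that general idea (not semfindr's API, which targets lavaan model objects):

```python
import numpy as np

def loo_influence(X, y):
    """Leave-one-out case influence on OLS coefficients: for each case i,
    the change full_fit - fit_without_i. One row per left-out case."""
    full = np.linalg.lstsq(X, y, rcond=None)[0]
    n = len(y)
    out = np.empty((n, X.shape[1]))
    for i in range(n):
        keep = np.arange(n) != i
        out[i] = full - np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    return out

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=20)])  # intercept + one predictor
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=20)
infl = loo_influence(X, y)  # large rows flag influential cases
```

Separating the n refits from the influence computation, as the package does, means the expensive step is run once and the resulting estimates can feed many different influence measures and plots.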
Pub Date: 2026-02-20 | DOI: 10.1080/00273171.2026.2622120
Christopher M Crawford, Jonathan J Park, Sy-Miin Chow, Anja F Ernst, Vladas Pipiras, Zachary F Fisher
Interest in the study and analysis of dynamic processes in the social, behavioral, and health sciences has burgeoned in recent years due to the increased availability of intensive longitudinal data. However, how best to model and account for the persistent heterogeneity characterizing such processes remains an open question. The multi-VAR framework, a recent methodological development built on the vector autoregressive model, accommodates heterogeneous dynamics in multiple-subject time series through structured penalization. In the original multi-VAR proposal, individual-level transition matrices are decomposed into common and unique dynamics, allowing for generalizable and person-specific features. The current project extends this framework to allow additionally for the identification and penalized estimation of subgroup-specific dynamics; that is, patterns of dynamics that are shared across subsets of individuals. The performance of the proposed subgrouping extension is evaluated in the context of both a simulation study and empirical application, and results are compared to alternative methods for subgrouping multiple-subject, multivariate time series.
Title: Penalized Subgrouping of Heterogeneous Time Series (Multivariate Behavioral Research, pp. 1-24)
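The decomposition described above, per-subject transition matrices split into shared and person-specific parts, can be sketched with plain least-squares VAR(1) fits and a robust average as the "common" component. This is an illustration of the decomposition idea only, not the multi-VAR penalized estimator:

```python
import numpy as np

def fit_var1(y):
    """Least-squares VAR(1): regress y_t on y_{t-1} for one subject's
    (T x k) multivariate time series; returns the k x k transition matrix."""
    Y, X = y[1:], y[:-1]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T

rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1], [0.0, 0.4]])  # assumed shared dynamics
subjects = []
for _ in range(5):
    y = np.zeros((200, 2))
    for t in range(1, 200):
        y[t] = A_true @ y[t - 1] + rng.normal(scale=0.5, size=2)
    subjects.append(fit_var1(y))

common = np.median(subjects, axis=0)       # crude stand-in for the common component
unique = [A - common for A in subjects]    # person-specific deviations
```

The subgrouping extension in the paper adds a third, subgroup-specific layer between the common and unique components, estimated via structured penalization rather than the simple averaging used here.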
Pub Date: 2026-02-04 | DOI: 10.1080/00273171.2025.2612035
Sijia Li, Victoria Savalei
Confirmatory bifactor models have been widely applied to understand multidimensional constructs in different areas of psychology research. Maximal reliability captures how well an optimal linear composite (OLC) represents the target latent variable. In this article, we point out that researchers have been using an incorrect generalization of coefficient H, a maximal reliability coefficient developed for one-factor models, with bifactor models. We present two sets of correct equations for maximal reliability: one based on an OLC for the entire scale and one based on a sub-composite consisting only of relevant items (OLSC). We illustrate these equations on a simulated data example and on a real data example, and compare them to other reliability coefficients. In a small population simulation, we find that OLCs and OLSCs are not reliable measures of group factors in models that contain fewer than 100 indicators. In addition, somewhat unexpectedly, we find that OLCs and OLSCs often receive negative weights. Overall, we recommend against using optimal composites or sub-composites as proxies for group factors, due to poor reliability and difficulties of interpretation. However, maximal reliability indices can be reported to evaluate the quality of a bifactor model.
Title: Calculating and Interpreting Maximal Reliability in Bifactor Models (Multivariate Behavioral Research, pp. 1-22)
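For context, the coefficient H that the abstract says has been incorrectly generalized is well defined for one-factor models: with standardized loadings, it gives the reliability of the optimally weighted composite. A minimal sketch of that one-factor formula (the paper's corrected bifactor equations are more involved):

```python
import numpy as np

def coefficient_h(loadings):
    """Maximal reliability (coefficient H) for a one-factor model with
    standardized loadings: H = s / (1 + s), where
    s = sum(lambda^2 / (1 - lambda^2))."""
    lam = np.asarray(loadings, float)
    s = np.sum(lam**2 / (1 - lam**2))
    return s / (1 + s)

h4 = coefficient_h([0.7, 0.7, 0.7, 0.7])  # four items loading 0.7
h8 = coefficient_h([0.7] * 8)             # adding items raises H
```

Because H is based on optimal weights, it is never smaller than unit-weight reliability coefficients such as omega for the same model, which is one reason misapplying it to group factors can be misleading.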
Pub Date: 2026-01-23 | DOI: 10.1080/00273171.2025.2606868
Joost R van Ginkel, Dylan Molenaar
In moderated factor analysis, the parameters of the traditional common factor model are a function of an external continuous moderator variable. Handling missing values on the observed indicator variables of the common factors is straightforward, as the parameters can be estimated using full information maximum likelihood. However, for cases with missing values on the moderator variable, the likelihood function cannot be evaluated. Consequently, in practical applications of the moderated factor model, these cases are omitted from the analysis by listwise deletion. As listwise deletion is known to potentially affect the consistency and precision of the results, we propose a multiple imputation procedure based on the moderated factor model for handling missing values on the moderator variable in the presence of missing values on the indicator variables. We compare this new procedure with listwise deletion and predictive mean matching. The results show that both listwise deletion and predictive mean matching have less power and produce more bias in parameter estimates than multiple imputation under the moderated factor model.
{"title":"Multiple Imputation of Missing Data in Moderated Factor Analysis.","authors":"Joost R van Ginkel, Dylan Molenaar","doi":"10.1080/00273171.2025.2606868","DOIUrl":"https://doi.org/10.1080/00273171.2025.2606868","url":null,"abstract":"<p><p>In moderated factor analysis, the parameters of the traditional common factor model are a function of an external continuous moderator variable. Handling missing values on the observed indicator variables of the common factors is straightforward as the parameters can be estimated using full information maximum likelihood. However, for cases with missing values on the moderator variable the likelihood function cannot be evaluated. Consequently, in practical applications of the moderated factor model, these cases are omitted from the analysis by listwise deletion. As listwise deletion is known to potentially affect the consistency and precision of the results, we propose a moderated factor model based multiple imputation procedure for handling missing values on the moderator variable in the presence of missing values on the indicator variables. We compare this new procedure with listwise deletion and predictive mean matching. The results show that both listwise deletion and predictive mean matching have less power and produce more bias in parameter estimates than multiple imputation under the moderated factor model.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-17"},"PeriodicalIF":3.5,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146031522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
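Whatever the imputation engine (the moderated-factor-model imputer proposed above, or predictive mean matching), the m completed-data analyses are combined with Rubin's rules; a minimal sketch of that pooling step:

```python
def pool_rubin(estimates, variances):
    """Pool m completed-data estimates via Rubin's rules.

    Returns (qbar, T): the pooled point estimate (mean over imputations)
    and the total variance T = W + (1 + 1/m) * B, combining the average
    within-imputation variance W with the between-imputation variance B.
    """
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    return qbar, w + (1.0 + 1.0 / m) * b
```

For example, pooling three imputed estimates 1.0, 1.2, 0.8 with within-imputation variances 0.04, 0.05, 0.03 gives a pooled estimate of 1.0 and a total variance larger than the average 0.04, reflecting between-imputation uncertainty — the component listwise deletion ignores.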
Pub Date : 2026-01-20 DOI: 10.1080/00273171.2026.2615659
Yajnaseni Chakraborti, Recai M Yucel, Megan E Piper, Jeremy Mennis, Anthony J Alberg, Timothy B Baker, Donna L Coffman
Behavioral processes are often complex and vary over time, requiring intensive longitudinal data to capture the dynamic elements involved. For example, examining daily socio-behavioral and treatment adherence data collected during a smoking quit attempt can reveal how, when, and why withdrawal symptoms change, offering insight into critical windows of relapse risk in the cessation process. However, analytical methods (e.g., time-varying causal mediation methods) that can translate such intensive longitudinal data into time-varying causal effects remain limited, hindering a deeper understanding of these dynamic behavioral processes. We propose a new approach, an augmented mediational g-formula with a two-step estimation strategy, to estimate time-varying causal direct and indirect effects. Its performance was evaluated via simulation, comparing bias, precision, and alignment with the product-of-coefficients approach. The optimal approach identified by the simulation study was applied to data from the Wisconsin Smokers' Health Study II to assess the effect of randomized pharmacological treatment assignment (exposure) on daily smoking cessation outcomes, mediated via daily treatment adherence, in the presence of a time-varying confounder (daily stress). Daily stress was driven by social contextual factors but was not affected by the exposure. Within its scope, this study serves as a preliminary framework for studying the causal structure of time-varying bio-behavioral processes.
{"title":"Time-Varying Path-Specific Direct and Indirect Effects: A Novel Approach to Examine Dynamic Behavioral Processes with Application to Smoking Cessation.","authors":"Yajnaseni Chakraborti, Recai M Yucel, Megan E Piper, Jeremy Mennis, Anthony J Alberg, Timothy B Baker, Donna L Coffman","doi":"10.1080/00273171.2026.2615659","DOIUrl":"https://doi.org/10.1080/00273171.2026.2615659","url":null,"abstract":"<p><p>Behavioral processes are often complex, and vary over time, requiring intensive longitudinal data to effectively capture the dynamic elements involved. For example, examining daily socio-behavioral and treatment adherence data collected during a smoking quit attempt, can reveal how, when, and why withdrawal symptoms change, offering insight into critical windows of relapse-risk in the cessation process. However, analytical methods (e.g., time-varying causal mediation methods), that can translate such intensive longitudinal data into time-varying causal effects remain limited, hindering a deeper understanding of these dynamic behavioral processes. We propose a new approach, augmented mediational g-formula with a two-step estimation strategy, to estimate time-varying causal (in)direct effects. Its performance was evaluated <i>via</i> simulation, comparing bias, precision, and alignment with the product-of-coefficients approach. The optimal approach identified by the simulation study was applied to data from the Wisconsin Smokers' Health Study II, for assessing the effect of randomized pharmacological treatment assignment (exposure) on daily smoking cessation outcome(s), mediated <i>via</i> daily treatment adherence, in the presence of a time-varying confounder (daily stress). Daily stress was due to social contextual factors but not affected by the exposure. Within its scope, this study serves as a preliminary framework for studying the causal structure of time-varying bio-behavioral processes.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-19"},"PeriodicalIF":3.5,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146013010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
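The g-formula logic the abstract builds on can be illustrated with a Monte Carlo sketch under hypothetical linear working models (the coefficients a, b, c and the models below are illustrative assumptions, not the paper's augmented two-step estimator):

```python
import random

# Hypothetical single-time-point linear models (illustration only):
#   mediator:  M = a*X + eps_M,   outcome:  Y = c*X + b*M + eps_Y
a, b, c = 0.5, 0.8, 0.3

def g_formula_effects(n=200_000, seed=1):
    """Monte Carlo g-formula for natural direct/indirect effects."""
    rng = random.Random(seed)

    def sim_m(x):
        # Draw the mediator's distribution under exposure level x.
        return [a * x + rng.gauss(0, 1) for _ in range(n)]

    def mean_y(x, ms):
        # Standardize the mediator distribution into the outcome model.
        return sum(c * x + b * m for m in ms) / n

    m1, m0 = sim_m(1), sim_m(0)
    nde = mean_y(1, m0) - mean_y(0, m0)  # shift X, hold M at its X=0 law
    nie = mean_y(1, m1) - mean_y(1, m0)  # shift M's law, hold X at 1
    return nde, nie
```

Under these linear models the natural direct effect equals c and the natural indirect effect equals a*b — the product-of-coefficients value the abstract uses as its benchmark — so the simulation estimates should land near 0.3 and 0.4.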