Calculating and Interpreting Maximal Reliability in Bifactor Models
Sijia Li, Victoria Savalei
Multivariate Behavioral Research, pp. 1-22. Pub Date: 2026-02-04. DOI: 10.1080/00273171.2025.2612035

Confirmatory bifactor models have been widely applied to understand multidimensional constructs in different areas of psychology research. Maximal reliability captures how well an optimal linear composite (OLC) represents the target latent variable. In this article, we point out that researchers have been using an incorrect generalization of coefficient H, a maximal reliability coefficient developed for one-factor models, with bifactor models. We present two sets of correct equations for maximal reliability: one based on an OLC for the entire scale and one based on a sub-composite consisting only of relevant items (OLSC). We illustrate these equations on a simulated data example and on a real data example, and compare them to other reliability coefficients. In a small population simulation, we find that OLCs and OLSCs are not reliable measures of group factors in models that contain fewer than 100 indicators. In addition, somewhat unexpectedly, we find that OLCs and OLSCs often receive negative weights. Overall, we recommend against using optimal composites or sub-composites as proxies for group factors, due to poor reliability and difficulties of interpretation. However, maximal reliability indices can be reported to evaluate the quality of a bifactor model.
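For readers unfamiliar with coefficient H, the one-factor maximal reliability index the abstract refers to can be sketched as follows. This is a minimal illustration of the usual Hancock–Mueller form for standardized loadings, not code from the article, and it is exactly the one-factor quantity whose generalization to bifactor models the authors argue has been done incorrectly.

```python
def coefficient_h(loadings):
    """One-factor maximal reliability (coefficient H) from standardized loadings.

    H = S / (1 + S), where S = sum over items of lambda_i^2 / (1 - lambda_i^2).
    Assumes |lambda_i| < 1 (standardized loadings, no Heywood cases).
    """
    s = sum(l ** 2 / (1.0 - l ** 2) for l in loadings)
    return s / (1.0 + s)


# Example: three standardized loadings on a single factor
h = coefficient_h([0.7, 0.8, 0.6])
```

Note that H is never smaller than the reliability of the best single item (its squared loading), which is why it is called "maximal": with a single indicator of loading 0.8, H reduces to 0.64.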
Multiple Imputation of Missing Data in Moderated Factor Analysis
Joost R van Ginkel, Dylan Molenaar
Multivariate Behavioral Research, pp. 1-17. Pub Date: 2026-01-23. DOI: 10.1080/00273171.2025.2606868

In moderated factor analysis, the parameters of the traditional common factor model are a function of an external continuous moderator variable. Handling missing values on the observed indicator variables of the common factors is straightforward, as the parameters can be estimated using full information maximum likelihood. However, for cases with missing values on the moderator variable, the likelihood function cannot be evaluated. Consequently, in practical applications of the moderated factor model, these cases are omitted from the analysis by listwise deletion. As listwise deletion is known to potentially affect the consistency and precision of the results, we propose a multiple imputation procedure based on the moderated factor model for handling missing values on the moderator variable in the presence of missing values on the indicator variables. We compare this new procedure with listwise deletion and predictive mean matching. The results show that both listwise deletion and predictive mean matching have less power and produce more bias in parameter estimates than multiple imputation under the moderated factor model.
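Predictive mean matching, one of the comparison methods in the abstract above, can be sketched as follows. This is a minimal hypothetical illustration of the core donor-matching idea; production implementations (e.g., in the mice package for R) add refinements such as Bayesian draws of the regression coefficients.

```python
import random


def pmm_impute(y_obs, yhat_obs, yhat_mis, k=5, rng=None):
    """Predictive mean matching: for each case with a missing y, find the k
    observed cases whose predicted values are closest to its own predicted
    value, then impute the actually observed y of a randomly chosen donor.
    This keeps imputations within the range of observed data."""
    rng = rng or random.Random(0)
    imputed = []
    for ym in yhat_mis:
        # indices of the k observed cases with the closest predictions
        donors = sorted(range(len(y_obs)),
                        key=lambda i: abs(yhat_obs[i] - ym))[:k]
        imputed.append(y_obs[rng.choice(donors)])
    return imputed


# Hypothetical example: four observed outcomes, their model predictions,
# and predictions for two cases whose outcomes are missing
imputed = pmm_impute([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 2.9, 4.2],
                     [2.0, 3.0], k=2)
```

Because every imputed value is copied from a real donor, PMM never produces impossible values, but, as the simulation results above suggest, it can be outperformed by model-based multiple imputation when the analysis model is a moderated factor model.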
Time-Varying Path-Specific Direct and Indirect Effects: A Novel Approach to Examine Dynamic Behavioral Processes with Application to Smoking Cessation
Yajnaseni Chakraborti, Recai M Yucel, Megan E Piper, Jeremy Mennis, Anthony J Alberg, Timothy B Baker, Donna L Coffman
Multivariate Behavioral Research, pp. 1-19. Pub Date: 2026-01-20. DOI: 10.1080/00273171.2026.2615659

Behavioral processes are often complex and vary over time, requiring intensive longitudinal data to effectively capture the dynamic elements involved. For example, examining daily socio-behavioral and treatment adherence data collected during a smoking quit attempt can reveal how, when, and why withdrawal symptoms change, offering insight into critical windows of relapse risk in the cessation process. However, analytical methods (e.g., time-varying causal mediation methods) that can translate such intensive longitudinal data into time-varying causal effects remain limited, hindering a deeper understanding of these dynamic behavioral processes. We propose a new approach, an augmented mediational g-formula with a two-step estimation strategy, to estimate time-varying causal direct and indirect effects. Its performance was evaluated via simulation, comparing bias, precision, and alignment with the product-of-coefficients approach. The optimal approach identified by the simulation study was applied to data from the Wisconsin Smokers' Health Study II to assess the effect of randomized pharmacological treatment assignment (exposure) on daily smoking cessation outcomes, mediated via daily treatment adherence, in the presence of a time-varying confounder (daily stress). Daily stress was driven by social contextual factors but not affected by the exposure. Within its scope, this study serves as a preliminary framework for studying the causal structure of time-varying bio-behavioral processes.
Moderating the Consequences of Longitudinal Change for Distal Outcomes
Ethan M McCormick
Multivariate Behavioral Research, pp. 1-19. Pub Date: 2026-01-14. DOI: 10.1080/00273171.2026.2613311

There has been a growing interest in using earlier change to predict downstream distal outcomes in development; however, prior work has mostly focused on estimating the unique effect of the different growth parameters (e.g., intercept and slope) rather than focusing on the trajectory as a whole. Here I lay out a distal outcome latent curve model with latent interactions which attempts to model the joint effect of growth parameters on these later outcomes. I show again that these models require us to contend with unintuitive time coding effects which can impact the direction and significance of effects, and that plotting and probing are necessary for disambiguating these joint effects. These graphical approaches emphasize practical steps for applied researchers in understanding these effects. I then outline how future research can help clarify optimal approaches for using the trajectory as a whole rather than the unique effects of its individual sub-components.
A Latent Space Graded Response Model for Likert-Scale Psychological Assessments
Ludovica De Carolis, Inhan Kang, Minjeong Jeon
Multivariate Behavioral Research, pp. 1-26. Pub Date: 2025-12-30. DOI: 10.1080/00273171.2025.2605678

In this study, we introduce a novel modeling approach for ordinal response data, extending the one-parameter graded response model. The proposed model incorporates unobserved interactions between respondents and items, represented as distances in a two-dimensional Euclidean space, referred to as an interaction map. This latent space graded response model (LSGRM) addresses potential violations of the conditional independence assumption shared by traditional main-effect-only psychometric models and offers a visualization tool for exploring conditional dependence in ordinal item response data. Through simulation and empirical studies, we illustrate the utility of the proposed approach in analyzing Likert-scale psychological assessment data. Also, by comparing the results with those from other models of different data modalities, we examine the impact of dichotomization and of treating ordinal responses as continuous on conditional dependence.
A Two-Step Robust Estimation Approach for Inferring Within-Person Relations in Longitudinal Design: Tutorial and Simulations
Satoshi Usami
Multivariate Behavioral Research, pp. 1-22. Pub Date: 2025-12-27. DOI: 10.1080/00273171.2025.2601271

Psychological researchers have shown an interest in disaggregating within-person variability from between-person differences. This paper provides a tutorial, simulation, and illustrative example of a new approach proposed by Usami (2023). This approach consists of a two-step procedure: within-person variability scores (WPVS) for each person, which are disaggregated from the stable traits of that person, are predicted using structural equation modeling, and causal parameters are then estimated via a potential outcome approach, such as by using structural nested mean models (SNMMs). This method has several advantages: (i) curvilinear and interaction effects for WPVS can be flexibly included as latent variables in treatment and outcome models; (ii) more accurate estimates of causal parameters for reciprocal relations can be obtained under certain conditions owing to their double robustness, even if unobserved time-varying confounders and model misspecifications exist; (iii) no models for (the distributions of) observed time-varying confounders are needed for estimation; and (iv) the risk of obtaining improper solutions is reduced. Estimation performance is investigated through large-scale simulations, which show that the proposed approach works well in many conditions if longitudinal data with T ≥ 4 time points are available. An analytic example using data from the Tokyo Teen Cohort (TTC) study is also provided.
Neural Network Analysis of Psychological Data: A Step-by-Step Guide
Lingbo Tong, Zhiyong Zhang
Multivariate Behavioral Research, pp. 1-21. Pub Date: 2025-12-03. DOI: 10.1080/00273171.2025.2587379

Artificial neural networks (ANNs) have attracted increasing attention in the field of psychology. With the availability of software programs, wide application of ANNs has become possible. However, without a firm understanding of the basics of ANNs, issues can easily arise. This article presents a step-by-step guide for implementing a feed-forward neural network (FNN) on a psychological data set to illustrate the critical steps in building, estimating, and interpreting a neural network model. We start with a concrete example of a basic 3-layer FNN, illustrating the core concepts, the matrix representation, and the whole optimization process. By adjusting parameters and changing the model structure, we examine their effects on model performance. Then, we introduce accessible methods for interpreting model results and making inferences. Through the guide, we hope to help researchers avoid common problems in applying neural network models and machine learning methods in general.
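The forward pass of a basic 3-layer FNN of the kind the guide describes (input layer, one hidden layer, output layer) can be sketched in a few lines. This is a hypothetical minimal example with made-up weights, not the guide's own code; a logistic activation is assumed for the hidden layer and a linear output.

```python
import math


def forward(x, W1, b1, W2, b2):
    """Forward pass of a 3-layer feed-forward network:
    input -> hidden (logistic activation) -> output (linear)."""
    # Hidden layer: h_j = sigmoid( sum_i W1[j][i] * x[i] + b1[j] )
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    # Output layer: linear combination of the hidden units
    return [sum(w * hj for w, hj in zip(row, h)) + b for row, b in zip(W2, b2)]


# Hypothetical weights: 2 inputs, 2 hidden units, 1 output
W1 = [[0.5, -0.3], [0.8, 0.2]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.05]
y = forward([1.0, 2.0], W1, b1, W2, b2)
```

Training then amounts to adjusting W1, b1, W2, and b2 to reduce a loss on the outputs, which is the optimization process the guide walks through in matrix form.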
Novel Full-Bayesian and Hybrid-Bayesian Approaches for Modeling Intraindividual Variability
Yuan Fang, Lijuan Wang
Multivariate Behavioral Research, pp. 1-16. Pub Date: 2025-12-02. DOI: 10.1080/00273171.2025.2592361

Intraindividual variability (IIV) characterizes the amplitude and temporal dependency of short-term fluctuations of a variable and is often used to predict outcomes in psychological studies. However, how to properly model IIV is understudied. In particular, the intraindividual standard deviation (or variance), which quantifies the amplitude of fluctuation of a variable around its mean level, can be challenging to model directly in popular latent variable frameworks, such as dynamic structural equation modeling (DSEM). In this study, we introduce three novel modeling methods for treating IIV as a predictor: two two-step hybrid-Bayesian methods using DSEM and a one-step full Bayesian method. We conducted a simulation study to evaluate the performance of the three methods and compared them to the conventional regression approach under various data conditions. Simulation results showed that the hybrid-Bayesian approach with multiple draws (HBM) and the one-step full Bayesian (FB) approach performed well in recovering the parameters when sufficient sample size and time points were available. The data requirement of FB was lower than that of HBM. However, the conventional approach and the hybrid-Bayesian approach with a single draw failed to recover parameters, even with large samples. We provide a simulated data example with code online to illustrate the use of the methods.
Integrated Trend and Lagged Modeling of Multi-Subject, Multilevel, and Short Time Series
Xiaoyue Xiong, Yanling Li, Michael D Hunter, Sy-Miin Chow
Multivariate Behavioral Research, pp. 1-24. Pub Date: 2025-11-22. DOI: 10.1080/00273171.2025.2587286

Trends represent systematic intra-individual variations that occur over slower time scales and that, if unaccounted for, are known to bias estimation of the momentary change patterns captured by time series models. The applicability of detrending methods has rarely been assessed in the context of multi-level longitudinal panel data, namely, nested data structures with relatively few measurements. This paper evaluated the efficacy of a series of two-stage detrending methods against a single-stage Bayesian approach in fitting multi-level nonlinear growth curve models with autoregressive residuals (ml-GAR) with random effects in both the growth and autoregressive processes. Monte Carlo simulation studies revealed that the single-stage Bayesian approach, in contrast to the two-stage approaches, exhibited satisfactory properties with as few as five time points when the number of individuals was large (e.g., 500 individuals). It also outperformed the two-stage approaches when correlated random effects between the trend and autoregressive processes were misspecified as a diagonal random effect structure. Empirical results from the Early Childhood Longitudinal Study-Kindergarten Class (ECLS-K) data suggested substantial deviations in conclusions regarding children's reading ability when using two-stage rather than single-stage approaches, highlighting the importance of simultaneous modeling of trends and intraindividual variability whenever feasible.
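The two-stage idea being evaluated above — remove a trend first, then model the residual dynamics — can be illustrated with a per-person linear detrend followed by a lag-1 autocorrelation estimate. This is a deliberately simplified sketch of the generic two-stage workflow, not the ml-GAR model itself, which is nonlinear and multilevel.

```python
def detrend_then_ar1(y):
    """Two-stage sketch for one person's series: (1) fit a linear trend by
    ordinary least squares and take residuals; (2) estimate the lag-1
    autocorrelation of those residuals. Returns (residuals, ar1_estimate)."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    # Stage 1: OLS slope and residuals around the fitted line
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    resid = [yi - (ybar + slope * (ti - tbar)) for ti, yi in zip(t, y)]
    # Stage 2: lag-1 autocorrelation of the detrended series
    denom = sum(r * r for r in resid)
    ar1 = (sum(resid[i] * resid[i - 1] for i in range(1, n)) / denom
           if denom > 1e-12 else 0.0)
    return resid, ar1


# A purely linear series leaves no residual dynamics to model
resid, ar1 = detrend_then_ar1([1.0, 3.0, 5.0, 7.0, 9.0])
```

The paper's point is that with short series this stage-1 trend estimate is noisy, and carrying its error into stage 2 biases the dynamic parameters — which is what the single-stage Bayesian approach, estimating both parts jointly, avoids.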
Demystifying Posterior Distributions: A Tutorial on Their Derivation
Han Du, Fang Liu, Zhiyong Zhang, Craig Enders
Multivariate Behavioral Research, pp. 1-15. Pub Date: 2025-11-21. DOI: 10.1080/00273171.2025.2570250

Bayesian statistics have gained significant traction across various fields over the past few decades. Bayesian statistics textbooks often provide both code and the analytical forms of parameters for simple models. However, they often omit the process of deriving posterior distributions or limit it to basic univariate examples focused on the mean and variance. Additionally, these resources frequently assume a strong background in linear algebra and probability theory, which can present barriers for researchers without extensive mathematical training. This tutorial aims to fill that gap by offering a step-by-step guide to deriving posterior distributions. We aim to make concepts typically reserved for advanced statistics courses more accessible and practical. This tutorial covers two models: the univariate normal model and the multilevel model. The concepts and properties demonstrated in the two examples can be generalized to other models and distributions.
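The univariate normal model the tutorial covers has a well-known conjugate result when the data variance is treated as known: precisions (inverse variances) add, and the posterior mean is a precision-weighted average of the prior mean and the sample mean. A minimal sketch of that standard textbook result (not necessarily the authors' parameterization):

```python
def normal_posterior(ys, prior_mean, prior_var, data_var):
    """Posterior of a normal mean with known data variance data_var and a
    N(prior_mean, prior_var) prior on the mean:

        posterior precision = 1/prior_var + n/data_var
        posterior mean      = weighted average of prior_mean and the sample
                              mean, weighted by their precisions.

    Returns (posterior_mean, posterior_variance)."""
    n = len(ys)
    ybar = sum(ys) / n
    post_prec = 1.0 / prior_var + n / data_var
    post_mean = (prior_mean / prior_var + n * ybar / data_var) / post_prec
    return post_mean, 1.0 / post_prec


# Example: N(0, 1) prior, unit data variance, four observations equal to 1.
# Precision 1 (prior) + 4 (data) = 5, so the data get 4/5 of the weight.
m, v = normal_posterior([1.0, 1.0, 1.0, 1.0],
                        prior_mean=0.0, prior_var=1.0, data_var=1.0)
```

As more data arrive, n/data_var dominates the posterior precision, so the posterior mean is pulled toward the sample mean and the posterior variance shrinks — the behavior the tutorial derives step by step.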