Evidence That Growth Mixture Model Results Are Highly Sensitive to Scoring Decisions
James Soland, Veronica Cole, Stephen Tavares, Qilin Zhang
Multivariate Behavioral Research, pp. 487-508. Pub Date: 2025-05-01. Epub Date: 2025-01-15. DOI: 10.1080/00273171.2024.2444955
Interest in identifying latent growth profiles to support the psychological and social-emotional development of individuals has translated into the widespread use of growth mixture models (GMMs). In most cases, GMMs are based on scores from item responses collected using survey scales or other measures. Research already shows that GMMs can be sensitive to departures from ideal modeling conditions and that growth model results outside of GMMs are sensitive to decisions about how item responses are scored, but the impact of scoring decisions on GMMs has never been investigated. We start to close that gap in the literature with the current study. Through empirical and Monte Carlo studies, we show that GMM results (including convergence, class enumeration, and latent growth trajectories within class) are extremely sensitive to seemingly arcane measurement decisions. Further, our results make clear that, because GMM latent classes are not known a priori, measurement models used to produce scores for use in GMMs are, almost by definition, misspecified because they cannot account for group membership. Misspecification of the measurement model then, in turn, biases GMM results. Practical implications of these results are discussed. Our findings raise serious concerns that many results in the current GMM literature may be driven, in part or whole, by measurement artifacts rather than substantive differences in developmental trends.
Non-Stationarity in Time-Series Analysis: Modeling Stochastic and Deterministic Trends
Oisín Ryan, Jonas M B Haslbeck, Lourens J Waldorp
Multivariate Behavioral Research, pp. 556-588. Pub Date: 2025-05-01. Epub Date: 2025-01-15. DOI: 10.1080/00273171.2024.2436413
Time series analysis is increasingly popular across scientific domains. A key concept in time series analysis is stationarity, the stability of statistical properties of a time series. Understanding stationarity is crucial to addressing frequent issues in time series analysis such as the consequences of failing to model non-stationarity, how to determine the mechanisms generating non-stationarity, and consequently how to model those mechanisms (i.e., by differencing or detrending). However, many empirical researchers have a limited understanding of stationarity, which can lead to the use of incorrect research practices and misleading substantive conclusions. In this paper, we address this problem by answering these questions in an accessible way. To this end, we study how researchers can use detrending and differencing to model trends in time series analysis. We show via simulation the consequences of modeling trends inappropriately, and evaluate the performance of one popular approach to distinguish different trend types in empirical data. We present these results in an accessible way, providing an extensive introduction to key concepts in time series analysis, illustrated throughout with simple examples. Finally, we discuss a number of take-home messages and extensions to standard approaches, which directly address more complex time-series analysis problems encountered by empirical researchers.
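The distinction the abstract draws between stochastic and deterministic trends can be made concrete with a minimal numpy sketch (illustrative only, not code from the paper): differencing a random walk recovers its white-noise innovations exactly, while OLS detrending is the natural fix for a deterministic linear trend.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
eps = rng.normal(size=n)  # white-noise innovations

# Stochastic trend: a random walk y_t = y_{t-1} + eps_t.
rw = np.cumsum(eps)
# First-differencing recovers the innovations exactly, yielding a stationary series.
diffed = np.diff(rw)

# Deterministic trend: y_t = 0.05 * t + eps_t.
t = np.arange(n)
det_series = 0.05 * t + eps
# OLS detrending: regress on time and keep the residuals.
slope, intercept = np.polyfit(t, det_series, 1)
resid = det_series - (intercept + slope * t)
# `slope` should be close to the true trend of 0.05, and the residuals
# are centered at zero by construction of OLS with an intercept.
```

Applying the wrong operation (e.g., detrending a random walk) leaves a non-stationary residual series, which is one of the failure modes the paper examines via simulation.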
Evaluating Contextual Models for Intensive Longitudinal Data in the Presence of Noise
Anja F Ernst, Eva Ceulemans, Laura F Bringmann, Janne Adolf
Multivariate Behavioral Research, pp. 423-443. Pub Date: 2025-05-01. Epub Date: 2024-12-15. DOI: 10.1080/00273171.2024.2436420
Nowadays, research into affect frequently employs intensive longitudinal data to assess fluctuations in daily emotional experiences. The resulting data are often analyzed with moderated autoregressive models to capture the influences of contextual events on the emotion dynamics. The presence of noise (e.g., measurement error) in the measures of the contextual events, however, is commonly ignored in these models. Disregarding noise in these covariates when it is present may result in biased parameter estimates and wrong conclusions drawn about the underlying emotion dynamics. In a simulation study we evaluate the estimation accuracy, assessed in terms of bias and variance, of different moderated autoregressive models in the presence of noise in the covariate. We show that estimation accuracy decreases when the amount of noise in the covariate increases. We also show that this bias is magnified by a larger effect of the covariate, a slower switching frequency of the covariate, a discrete rather than a continuous covariate, and constant rather than occasional noise in the covariate. We also show that the bias that results from a noisy covariate does not decrease when the number of observations increases. We end with a few recommendations for applying moderated autoregressive models based on our simulation.
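The core mechanism behind this bias, and behind its insensitivity to sample size, is classical errors-in-variables attenuation. A minimal static sketch (my simplification, not the authors' moderated autoregressive setup) shows that regressing on a noisy version of the covariate shrinks the estimated effect toward zero no matter how large the sample is.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
b_true = 1.0

z = rng.normal(size=n)                  # true covariate, var(z) = 1
y = b_true * z + rng.normal(size=n)     # outcome
w = z + rng.normal(size=n)              # observed covariate with noise, var(u) = 1

# OLS slope of y on the clean vs. the noisy covariate
b_clean = np.cov(z, y)[0, 1] / np.var(z, ddof=1)
b_noisy = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
# Attenuation: plim b_noisy = b_true * var(z) / (var(z) + var(u)) = 0.5,
# a bias that persists as n grows, since only the noise ratio matters.
```

Because the attenuation factor depends on the noise-to-signal ratio rather than on n, collecting more observations cannot remove the bias, which matches the abstract's finding.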
MIIVefa: An R Package for a New Type of Exploratory Factor Analysis Using Model-Implied Instrumental Variables
Lan Luo, Kathleen M Gates, Kenneth A Bollen
Multivariate Behavioral Research, pp. 589-597. Pub Date: 2025-05-01. Epub Date: 2024-12-27. DOI: 10.1080/00273171.2024.2436418. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12189262/pdf/
We present the R package MIIVefa, designed to implement the MIIV-EFA algorithm. This algorithm explores and identifies the underlying factor structure within a set of variables. The resulting model is not a typical exploratory factor analysis (EFA) model because some loadings are fixed to zero and it allows users to include hypothesized correlated errors such as might occur with longitudinal data. As such, it resembles a confirmatory factor analysis (CFA) model. But, unlike CFA, the MIIV-EFA algorithm determines the number of factors and the items that load on these factors directly from the data. We provide both simulation and empirical examples to illustrate the application of MIIVefa and discuss its benefits and limitations.
Nodewise Parameter Aggregation for Psychometric Networks
K B S Huth, B DeLong, L Waldorp, M Marsman, M Rhemtulla
Multivariate Behavioral Research, pp. 509-517. Pub Date: 2025-05-01. Epub Date: 2025-01-22. DOI: 10.1080/00273171.2025.2450648
Psychometric networks can be estimated using nodewise regression to estimate edge weights when the joint distribution is analytically difficult to derive or the estimation is too computationally intensive. The nodewise approach runs generalized linear models with each node as the outcome. Two regression coefficients are obtained for each link, which need to be aggregated to obtain the edge weight (i.e., the conditional association). The nodewise approach has been shown to reveal the true graph structure. However, for continuous variables, the regression coefficients are scaled differently than the partial correlations, and therefore the nodewise approach may lead to different edge weights. Here, the aggregation of the two regression coefficients is crucial in obtaining the true partial correlation. We show that when the correlations of the two predictors with the control variables are different, averaging the regression coefficients leads to an asymptotically biased estimator of the partial correlation. This is likely to occur when a variable has a high correlation with other nodes in the network (e.g., variables in the same domain) and a lower correlation with another node (e.g., variables in a different domain). We discuss two different ways of aggregating the regression weights that do obtain the true partial correlation: first, multiplying the weights and taking their square root, and second, rescaling the regression weights by the residual variances. Both of these estimators can recover the true network structure and edge weights.
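The square-root aggregation result can be verified at the population level. For jointly Gaussian variables, the nodewise regression coefficient of y when regressing x on the remaining nodes is -K_xy/K_xx (with K the precision matrix), while the partial correlation is -K_xy/sqrt(K_xx * K_yy). A short numpy check (an illustrative three-node example, not the paper's code) shows the signed geometric mean of the two betas equals the partial correlation exactly, whereas their average does not when the two nodes correlate differently with the control:

```python
import numpy as np

# Population covariance for (x, y, z): x and y correlate with the control z
# at different strengths (0.6 vs. 0.1), so K_xx != K_yy.
S = np.array([[1.0, 0.3, 0.6],
              [0.3, 1.0, 0.1],
              [0.6, 0.1, 1.0]])
K = np.linalg.inv(S)  # precision matrix

# True partial correlation of x and y given z
r_xy = -K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])

# Nodewise betas: coefficient of y when regressing x on (y, z), and vice versa
b_xy = -K[0, 1] / K[0, 0]
b_yx = -K[0, 1] / K[1, 1]

sqrt_est = np.sign(b_xy) * np.sqrt(b_xy * b_yx)  # geometric-mean aggregation
avg_est = (b_xy + b_yx) / 2                      # naive averaging
```

Here sqrt_est reproduces r_xy exactly, while avg_est is off by a fixed amount that no sample size can repair, which is the asymptotic bias the abstract describes.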
TDCM: An R Package for Estimating Longitudinal Diagnostic Classification Models
Matthew J Madison, Minjeong Jeon, Michael Cotterell, Sergio Haab, Selay Zor
Multivariate Behavioral Research, pp. 518-527. Pub Date: 2025-05-01. Epub Date: 2025-02-12. DOI: 10.1080/00273171.2025.2453454
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or non-proficiency on specified latent attributes. Longitudinal DCMs have recently been developed as psychometric models for modeling changes in examinee proficiency statuses over time. Currently, software programs for estimating longitudinal DCMs are limited in functionality and generality, expensive, or cumbersome for applied researchers. This manuscript describes and demonstrates a newly developed R package for estimating a general longitudinal DCM, the transition diagnostic classification model.
A Tutorial on the Use of Artificial Intelligence Tools for Facial Emotion Recognition in R
Austin Wyman, Zhiyong Zhang
Multivariate Behavioral Research, pp. 641-655. Pub Date: 2025-05-01. Epub Date: 2025-02-14. DOI: 10.1080/00273171.2025.2455497
Automated detection of facial emotions has been a topic of interest in social and behavioral research for multiple decades but has become feasible only recently. In this tutorial, we review three popular artificial-intelligence-based emotion detection programs that are accessible to R programmers: Google Cloud Vision, Amazon Rekognition, and Py-Feat. We present their advantages and disadvantages, and provide sample code so that researchers can immediately begin designing, collecting, and analyzing emotion data. Furthermore, we provide an introductory-level explanation of the machine learning, deep learning, and computer vision algorithms that underlie most emotion detection programs in order to improve literacy of explainable artificial intelligence in the social and behavioral science literature.
Estimated Factor Scores Are Not True Factor Scores
Mijke Rhemtulla, Victoria Savalei
Multivariate Behavioral Research, pp. 598-619. Pub Date: 2025-05-01. Epub Date: 2025-01-22. DOI: 10.1080/00273171.2024.2444943
In this tutorial, we clarify the distinction between estimated factor scores, which are weighted composites of observed variables, and true factor scores, which are unobservable values of the underlying latent variable. Using an analogy with linear regression, we show how predicted values in linear regression share the properties of the most common type of factor score estimates, regression factor scores, computed from single-indicator and multiple-indicator latent variable models. Using simulated data from 1- and 2-factor models, we also show how the amount of measurement error affects the reliability of regression factor scores, and compare the performance of regression factor scores with that of unweighted sum scores.
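One way to see that estimated factor scores differ from true factor scores is that regression factor scores are shrunken: under a one-factor model with unit factor variance, the variance of the regression scores equals Λ'Σ⁻¹Λ, which is strictly less than 1 and equals the scores' reliability. A small numpy sketch (an illustrative population-level computation, not the authors' simulation) makes this concrete:

```python
import numpy as np

# One-factor model: p = 4 items, equal loadings 0.7, factor variance fixed at 1.
p, lam = 4, 0.7
Lambda = np.full((p, 1), lam)
Theta = np.eye(p) * (1 - lam**2)      # unique (error) variances
Sigma = Lambda @ Lambda.T + Theta     # model-implied item covariance matrix

# Regression factor-score weights: w = Sigma^{-1} Lambda (factor variance 1).
w = np.linalg.solve(Sigma, Lambda)

# Variance of the estimated scores = Lambda' Sigma^{-1} Lambda.
# This is below the true factor variance of 1 (shrinkage) and equals
# the squared correlation between estimated and true scores (reliability).
var_est = (Lambda.T @ w).item()
```

With these values var_est is about 0.79, so the estimated scores vary noticeably less than the true factor; only as measurement error vanishes does the gap close.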
Interrater Reliability for Interdependent Social Network Data: A Generalizability Theory Approach
Debby Ten Hove, Terrence D Jorgensen, L Andries van der Ark
Multivariate Behavioral Research, pp. 444-459. Pub Date: 2025-05-01. Epub Date: 2025-02-03. DOI: 10.1080/00273171.2024.2444940
We propose interrater reliability coefficients for observational interdependent social network data, which are dyadic data from a network of interacting subjects that are observed by external raters. Using the social relations model, dyadic scores of subjects' behaviors during these interactions can be decomposed into actor, partner, and relationship effects. These effects constitute different facets of theoretical interest about which researchers formulate research questions. Based on generalizability theory, we extended the social relations model with rater effects, resulting in a model that decomposes the variance of dyadic observational data into effects of actors, partners, relationships, raters, and their statistical interactions. We used the variances of these effects to define intraclass correlation coefficients (ICCs) that indicate the extent to which the actor, partner, and relationship effects can be generalized across external raters. We proposed Markov chain Monte Carlo estimation of a Bayesian hierarchical linear model to estimate the ICCs, and tested their bias and coverage in a simulation study. The method is illustrated using data on social mimicry.
Toward a Psychology of Individuals: The Ergodicity Information Index and a Bottom-up Approach for Finding Generalizations
Hudson Golino, John Nesselroade, Alexander P Christensen
Multivariate Behavioral Research, pp. 528-555. Pub Date: 2025-05-01. Epub Date: 2025-03-23. DOI: 10.1080/00273171.2025.2454901
In the last half of the twentieth century, psychology and neuroscience have experienced a renewed interest in intraindividual variation. To date, there are few quantitative methods to evaluate whether a population (between-person) structure is likely to hold for individual people, a property often referred to as ergodicity. We introduce a new network information theoretic metric, the ergodicity information index (EII), that quantifies the amount of information lost by representing all individuals with a between-person structure. A Monte Carlo simulation demonstrated that EII can effectively distinguish between ergodic and nonergodic systems. A bootstrap test is derived to statistically determine whether the empirical data are likely generated from an ergodic process. When a process is identified as nonergodic, it is possible that a mixture of groups exists. To evaluate whether groups exist, we develop an information theoretic clustering method to detect them. Finally, two empirical examples are presented using intensive longitudinal data from the personality and neuroscience domains. Both datasets were found to be nonergodic, and meaningful groupings were identified in each dataset. Subsequent analysis showed that some of these groups are ergodic, meaning that the individuals can be represented with a single population structure without significant loss of information. Notably, in the neuroscience data, we could correctly identify two clusters of individuals (young vs. older adults) measured by a pattern separation task that were related to hippocampal connectivity to the default mode network.