Estimating the number of factors in exploratory factor analysis via out-of-sample prediction errors.
Pub Date: 2024-02-01 | Epub Date: 2022-11-03 | DOI: 10.1037/met0000528
Jonas M B Haslbeck, Riet van Bork
Exploratory factor analysis (EFA) is one of the most popular statistical models in psychological science. A key problem in EFA is to estimate the number of factors. In this article, we present a new method for estimating the number of factors based on minimizing the out-of-sample prediction error of candidate factor models. We show in an extensive simulation study that our method slightly outperforms existing methods, including parallel analysis, Bayesian information criterion (BIC), Akaike information criterion (AIC), root mean squared error of approximation (RMSEA), and exploratory graph analysis. In addition, we show that, among the best performing methods, our method is the one that is most robust across different specifications of the true factor model. We provide an implementation of our method in the R-package fspe. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Estimating the number of factors in exploratory factor analysis via out-of-sample prediction errors.","authors":"Jonas M B Haslbeck, Riet van Bork","doi":"10.1037/met0000528","DOIUrl":"10.1037/met0000528","url":null,"abstract":"<p><p>Exploratory factor analysis (EFA) is one of the most popular statistical models in psychological science. A key problem in EFA is to estimate the number of factors. In this article, we present a new method for estimating the number of factors based on minimizing the out-of-sample prediction error of candidate factor models. We show in an extensive simulation study that our method slightly outperforms existing methods, including parallel analysis, Bayesian information criterion (BIC), Akaike information criterion (AIC), root mean squared error of approximation (RMSEA), and exploratory graph analysis. In addition, we show that, among the best performing methods, our method is the one that is most robust across different specifications of the true factor model. We provide an implementation of our method in the R-package <i>fspe</i>. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"48-64"},"PeriodicalIF":7.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10590357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Refining the causal loop diagram: A tutorial for maximizing the contribution of domain expertise in computational system dynamics modeling.
Pub Date: 2024-02-01 | Epub Date: 2022-05-12 | DOI: 10.1037/met0000484
Loes Crielaard, Jeroen F Uleman, Bas D L Châtel, Sacha Epskamp, Peter M A Sloot, Rick Quax
Complexity science and systems thinking are increasingly recognized as relevant paradigms for studying systems where biology, psychology, and socioenvironmental factors interact. The application of systems thinking, however, often stops at developing a conceptual model that visualizes the mapping of causal links within a system, e.g., a causal loop diagram (CLD). While this is an important contribution in itself, it is imperative to subsequently formulate a computable version of a CLD in order to interpret the dynamics of the modeled system and simulate "what if" scenarios. We propose to realize this by deriving knowledge from experts' mental models in biopsychosocial domains. This article first describes the steps required for capturing expert knowledge in a CLD such that it may result in a computational system dynamics model (SDM). For this purpose, we introduce several annotations to the CLD that facilitate this intended conversion. This annotated CLD (aCLD) includes sources of evidence, intermediary variables, functional forms of causal links, and the distinction between uncertain and known-to-be-absent causal links. We propose an algorithm for developing an aCLD that includes these annotations. We then describe how to formulate an SDM based on the aCLD. The described steps for this conversion help identify, quantify, and potentially reduce sources of uncertainty and obtain confidence in the results of the SDM's simulations. We utilize a running example that illustrates each step of this conversion process. The systematic approach described in this article facilitates and advances the application of computational science methods to biopsychosocial systems. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Refining the causal loop diagram: A tutorial for maximizing the contribution of domain expertise in computational system dynamics modeling.","authors":"Loes Crielaard, Jeroen F Uleman, Bas D L Châtel, Sacha Epskamp, Peter M A Sloot, Rick Quax","doi":"10.1037/met0000484","DOIUrl":"10.1037/met0000484","url":null,"abstract":"<p><p>Complexity science and systems thinking are increasingly recognized as relevant paradigms for studying systems where biology, psychology, and socioenvironmental factors interact. The application of systems thinking, however, often stops at developing a conceptual model that visualizes the mapping of causal links within a system, e.g., a causal loop diagram (CLD). While this is an important contribution in itself, it is imperative to subsequently formulate a computable version of a CLD in order to interpret the dynamics of the modeled system and simulate \"what if\" scenarios. We propose to realize this by deriving knowledge from experts' mental models in biopsychosocial domains. This article first describes the steps required for capturing expert knowledge in a CLD such that it may result in a computational system dynamics model (SDM). For this purpose, we introduce several annotations to the CLD that facilitate this intended conversion. This annotated CLD (aCLD) includes sources of evidence, intermediary variables, functional forms of causal links, and the distinction between uncertain and known-to-be-absent causal links. We propose an algorithm for developing an aCLD that includes these annotations. We then describe how to formulate an SDM based on the aCLD. The described steps for this conversion help identify, quantify, and potentially reduce sources of uncertainty and obtain confidence in the results of the SDM's simulations. We utilize a running example that illustrates each step of this conversion process. The systematic approach described in this article facilitates and advances the application of computational science methods to biopsychosocial systems. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"169-201"},"PeriodicalIF":7.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10011305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixture multilevel vector-autoregressive modeling.
Pub Date: 2024-02-01 | Epub Date: 2023-08-10 | DOI: 10.1037/met0000551
Anja F Ernst, Marieke E Timmerman, Feng Ji, Bertus F Jeronimus, Casper J Albers
With the rising popularity of intensive longitudinal research, the modeling techniques for such data are increasingly focused on individual differences. Here we present mixture multilevel vector-autoregressive modeling, which extends multilevel vector-autoregressive modeling by including a mixture, to identify individuals with similar traits and dynamic processes. This exploratory model identifies mixture components, where each component refers to individuals with similarities in means (expressing traits), autoregressions, and cross-regressions (expressing dynamics), while allowing for some interindividual differences in these attributes. Key modeling issues are discussed, and the issue of centering predictors is examined in a small simulation study. The proposed model is validated in a simulation study and used to analyze the affective data from the COGITO study. These data consist of samples from two different age groups, each comprising over 100 individuals measured for about 100 days. We demonstrate the advantage of exploratorily identifying mixture components by analyzing these heterogeneous samples jointly. The model identifies three distinct components, and we provide an interpretation for each component motivated by developmental psychology. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Mixture multilevel vector-autoregressive modeling.","authors":"Anja F Ernst, Marieke E Timmerman, Feng Ji, Bertus F Jeronimus, Casper J Albers","doi":"10.1037/met0000551","DOIUrl":"10.1037/met0000551","url":null,"abstract":"<p><p>With the rising popularity of intensive longitudinal research, the modeling techniques for such data are increasingly focused on individual differences. Here we present mixture multilevel vector-autoregressive modeling, which extends multilevel vector-autoregressive modeling by including a mixture, to identify individuals with similar traits and dynamic processes. This exploratory model identifies mixture components, where each component refers to individuals with similarities in means (expressing traits), autoregressions, and cross-regressions (expressing dynamics), while allowing for some interindividual differences in these attributes. Key issues in modeling are discussed, where the issue of centering predictors is examined in a small simulation study. The proposed model is validated in a simulation study and used to analyze the affective data from the COGITO study. These data consist of samples for two different age groups of over 100 individuals each who were measured for about 100 days. We demonstrate the advantage of exploratory identifying mixture components by analyzing these heterogeneous samples jointly. The model identifies three distinct components, and we provide an interpretation for each component motivated by developmental psychology. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"137-154"},"PeriodicalIF":7.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9958393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regularized continuous time structural equation models: A network perspective.
Pub Date: 2023-12-01 | Epub Date: 2023-01-12 | DOI: 10.1037/met0000550
Jannik H Orzek, Manuel C Voelkle
Regularized continuous time structural equation models are proposed to address two recent challenges in longitudinal research: unequally spaced measurement occasions and high model complexity. Unequally spaced measurement occasions are part of most longitudinal studies, sometimes intentionally (e.g., in experience sampling methods) and sometimes unintentionally (e.g., due to missing data). Yet, prominent dynamic models, such as the autoregressive cross-lagged model, assume equally spaced measurement occasions. If this assumption is violated, parameter estimates can be biased, potentially leading to false conclusions. Continuous time structural equation models (CTSEM) resolve this problem by taking the exact time point of a measurement into account. This allows for any arbitrary measurement scheme. We combine CTSEM with LASSO and adaptive LASSO regularization. Such regularization techniques are especially promising for the increasingly complex models in psychological research, the most prominent example being network models with often dozens or hundreds of parameters. Here, LASSO regularization can reduce the risk of overfitting and simplify the model interpretation. In this article, we highlight unique challenges in regularizing continuous time dynamic models, such as standardization or the optimization of the objective function, and offer different solutions. Our approach is implemented in the R (R Core Team, 2022) package regCtsem. We demonstrate the use of regCtsem in a simulation study, showing that the proposed regularization improves the parameter estimates, especially in small samples. The approach correctly eliminates true-zero parameters while retaining true-nonzero parameters. We present two empirical examples and end with a discussion on current limitations and future research directions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Regularized continuous time structural equation models: A network perspective.","authors":"Jannik H Orzek, Manuel C Voelkle","doi":"10.1037/met0000550","DOIUrl":"10.1037/met0000550","url":null,"abstract":"<p><p>Regularized continuous time structural equation models are proposed to address two recent challenges in longitudinal research: Unequally spaced measurement occasions and high model complexity. Unequally spaced measurement occasions are part of most longitudinal studies, sometimes intentionally (e.g., in experience sampling methods) sometimes unintentionally (e.g., due to missing data). Yet, prominent dynamic models, such as the autoregressive cross-lagged model, assume equally spaced measurement occasions. If this assumption is violated parameter estimates can be biased, potentially leading to false conclusions. Continuous time structural equation models (CTSEM) resolve this problem by taking the exact time point of a measurement into account. This allows for any arbitrary measurement scheme. We combine CTSEM with LASSO and adaptive LASSO regularization. Such regularization techniques are especially promising for the increasingly complex models in psychological research, the most prominent example being network models with often dozens or hundreds of parameters. Here, LASSO regularization can reduce the risk of overfitting and simplify the model interpretation. In this article we highlight unique challenges in regularizing continuous time dynamic models, such as standardization or the optimization of the objective function, and offer different solutions. Our approach is implemented in the R (R Core Team, 2022) package regCtsem. We demonstrate the use of regCtsem in a simulation study, showing that the proposed regularization improves the parameter estimates, especially in small samples. The approach correctly eliminates true-zero parameters while retaining true-nonzero parameters. We present two empirical examples and end with a discussion on current limitations and future research directions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1286-1320"},"PeriodicalIF":7.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10525388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The text-package: An R-package for analyzing and visualizing human language using natural language processing and transformers.
Pub Date: 2023-12-01 | Epub Date: 2023-05-01 | DOI: 10.1037/met0000542
Oscar Kjell, Salvatore Giorgi, H Andrew Schwartz
The language that individuals use for expressing themselves contains rich psychological information. Recent significant advances in Natural Language Processing (NLP) and Deep Learning (DL), namely transformers, have resulted in large performance gains in tasks related to understanding natural language. However, these state-of-the-art methods have not yet been made easily accessible to psychology researchers, nor have they been designed to be optimal for human-level analyses. This tutorial introduces text (https://r-text.org/), a new R-package for analyzing and visualizing human language using transformers, the latest techniques from NLP and DL. The text-package is both a modular solution for accessing state-of-the-art language models and an end-to-end solution catered to human-level analyses. Hence, text provides user-friendly functions tailored to test hypotheses in the social sciences for both relatively small and large data sets. The tutorial describes methods for analyzing text, providing functions with reliable defaults that can be used off the shelf, as well as a framework for advanced users to build on for novel pipelines. The reader learns about three core methods: (1) textEmbed(): to transform text to modern transformer-based word embeddings; (2) textTrain() and textPredict(): to train predictive models with embeddings as input and to generate predictions from these models; (3) textSimilarity() and textDistance(): to compute semantic similarity/distance scores between texts. The reader also learns about two extended methods: (1) textProjection()/textProjectionPlot() and (2) textCentrality()/textCentralityPlot(): to examine and visualize text within the embedding space. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"The text-package: An R-package for analyzing and visualizing human language using natural language processing and transformers.","authors":"Oscar Kjell, Salvatore Giorgi, H Andrew Schwartz","doi":"10.1037/met0000542","DOIUrl":"10.1037/met0000542","url":null,"abstract":"<p><p>The language that individuals use for expressing themselves contains rich psychological information. Recent significant advances in Natural Language Processing (NLP) and Deep Learning (DL), namely transformers, have resulted in large performance gains in tasks related to understanding natural language. However, these state-of-the-art methods have not yet been made easily accessible for psychology researchers, nor designed to be optimal for human-level analyses. This tutorial introduces text (https://r-text.org/), a new R-package for analyzing and visualizing human language using transformers, the latest techniques from NLP and DL. The text-package is both a modular solution for accessing state-of-the-art language models and an end-to-end solution catered for human-level analyses. Hence, text provides user-friendly functions tailored to test hypotheses in social sciences for both relatively small and large data sets. The tutorial describes methods for analyzing text, providing functions with reliable defaults that can be used off-the-shelf as well as providing a framework for the advanced users to build on for novel pipelines. The reader learns about three core methods: (1) textEmbed(): to transform text to modern transformer-based word embeddings; (2) textTrain() and textPredict(): to train predictive models with embeddings as input, and use the models to predict from; (3) textSimilarity() and textDistance(): to compute semantic similarity/distance scores between texts. The reader also learns about two extended methods: (1) textProjection()/textProjectionPlot() and (2) textCentrality()/textCentralityPlot(): to examine and visualize text within the embedding space. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1478-1498"},"PeriodicalIF":7.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9374010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seeking a better balance between efficiency and interpretability: Comparing the Likert response format with the Guttman response format.
Pub Date: 2023-12-01 | Epub Date: 2022-01-13 | DOI: 10.1037/met0000462
Mark Wilson, Shruti Bathia, Linda Morell, Perman Gochyyev, Bon W Koo, Rebecca Smith
The Likert response format is almost ubiquitous in the social sciences and has particular virtues regarding the relative simplicity of item generation and the efficiency of coding responses. However, in this article, we critique this very common item format, focusing on its affordance for interpretation in terms of internal structure validity evidence. We suggest an alternative, the Guttman response format, which we see as providing a better approach for gathering and interpreting internal structure validity evidence. Using a specific survey-based example, we illustrate how items in this alternative format can be developed, exemplify how such items operate, and explore some comparisons between the results from using the two formats. In conclusion, we recommend use of the Guttman response format to improve the interpretability of the resulting outcomes. Finally, we also note how this approach may be used in tandem with items that use the Likert response format to help balance efficiency with interpretability. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Seeking a better balance between efficiency and interpretability: Comparing the likert response format with the Guttman response format.","authors":"Mark Wilson, Shruti Bathia, Linda Morell, Perman Gochyyev, Bon W Koo, Rebecca Smith","doi":"10.1037/met0000462","DOIUrl":"10.1037/met0000462","url":null,"abstract":"<p><p>The Likert item response format for items is almost ubiquitous in the social sciences and has particular virtues regarding the relative simplicity of item-generation and the efficiency for coding responses. However, in this article, we critique this very common item format, focusing on its affordance for interpretation in terms of internal structure validity evidence. We suggest an alternative, the Guttman response format, which we see as providing a better approach for gathering and interpreting internal structure validity evidence. Using a specific survey-based example, we illustrate how items in this alternative format can be developed, exemplify how such items operate, and explore some comparisons between the results from using the two formats. In conclusion, we recommend usage of the Guttman response format for improving the interpretability of the resulting outcomes. Finally, we also note how this approach may be used in tandem with items that use the Likert response format to help balance efficiency with interpretability. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1358-1373"},"PeriodicalIF":7.6,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9854400/pdf/nihms-1858997.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9771300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting mean changes in experience sampling data in real time: A comparison of univariate and multivariate statistical process control methods.
Pub Date: 2023-12-01 | Epub Date: 2021-12-16 | DOI: 10.1037/met0000447
Evelien Schat, Francis Tuerlinckx, Arnout C Smit, Bart De Ketelaere, Eva Ceulemans
Detecting early warning signals of developing mood disorders in continuously collected affective experience sampling method (ESM) data would pave the way for timely intervention, to prevent a mood disorder from occurring or to mitigate its severity. However, there is an urgent need for online statistical methods tailored to the specifics of ESM data. Statistical process control (SPC) procedures, originally developed for monitoring industrial processes, seem promising tools. However, affective ESM data violate major assumptions of the SPC procedures: the observations are not independent across time, are often skewed, and are characterized by missingness. Therefore, evaluating SPC performance on simulated data with typical ESM features is a crucial step. In this article, we didactically introduce six univariate and multivariate SPC procedures: Shewhart, Hotelling's T², EWMA, MEWMA, CUSUM, and MCUSUM. Their behavior is illustrated on publicly available affective ESM data of a patient who relapsed into depression. To deal with the missingness, autocorrelation, and skewness in these data, we compute and monitor the day averages rather than the individual measurement occasions. Moreover, we apply all procedures to simulated data with typical affective ESM features and evaluate their performance at detecting small to moderate mean changes. The simulation results indicate that the (M)EWMA and (M)CUSUM procedures clearly outperform the Shewhart and Hotelling's T² procedures and support using day averages rather than the original data. Based on these results, we provide some recommendations for optimizing SPC performance when monitoring ESM data, as well as a wide range of directions for future research. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Detecting mean changes in experience sampling data in real time: A comparison of univariate and multivariate statistical process control methods.","authors":"Evelien Schat, Francis Tuerlinckx, Arnout C Smit, Bart De Ketelaere, Eva Ceulemans","doi":"10.1037/met0000447","DOIUrl":"10.1037/met0000447","url":null,"abstract":"<p><p>Detecting early warning signals of developing mood disorders in continuously collected affective experience sampling (ESM) data would pave the way for timely intervention and prevention of a mood disorder from occurring or to mitigate its severity. However, there is an urgent need for online statistical methods tailored to the specifics of ESM data. Statistical process control (SPC) procedures, originally developed for monitoring industrial processes, seem promising tools. However, affective ESM data violate major assumptions of the SPC procedures: The observations are not independent across time, often skewed distributed, and characterized by missingness. Therefore, evaluating SPC performance on simulated data with typical ESM features is a crucial step. In this article, we didactically introduce six univariate and multivariate SPC procedures: Shewhart, Hotelling's <i>T</i>², EWMA, MEWMA, CUSUM and MCUSUM. Their behavior is illustrated on publicly available affective ESM data of a patient that relapsed into depression. To deal with the missingness, autocorrelation, and skewness in these data, we compute and monitor the day averages rather than the individual measurement occasions. Moreover, we apply all procedures on simulated data with typical affective ESM features, and evaluate their performance at detecting small to moderate mean changes. The simulation results indicate that the (M)EWMA and (M)CUSUM procedures clearly outperform the Shewhart and Hotelling's <i>T</i>² procedures and support using day averages rather than the original data. Based on these results, we provide some recommendations for optimizing SPC performance when monitoring ESM data as well as a wide range of directions for future research. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1335-1357"},"PeriodicalIF":7.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9734760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing affect dynamics with a damped linear oscillator model: Theoretical considerations and recommendations for individual-level applications.
Pub Date: 2023-10-16 | DOI: 10.1037/met0000615
Mar J F Ollero, Eduardo Estrada, Michael D Hunter, Pablo F Cáncer
People show stable differences in the way their affect fluctuates over time. Within the general framework of dynamical systems, the damped linear oscillator (DLO) model has been proposed as a useful approach to study affect dynamics. The DLO model can be applied to repeated measures provided by a single individual, and the resulting parameters can capture relevant features of the person's affect dynamics. Focusing on negative affect, we provide an accessible interpretation of the DLO model parameters in terms of emotional lability, resilience, and vulnerability. We conducted a Monte Carlo study to test the DLO model performance under different empirically relevant conditions in terms of individual characteristics and sampling scheme. We used state-space models in continuous time. The results show that, under certain conditions, the DLO model is able to accurately and efficiently recover the parameters underlying the affective dynamics of a single individual. We discuss the results and the theoretical and practical implications of using this model, illustrate how to use it for studying psychological phenomena at the individual level, and provide specific recommendations on how to collect data for this purpose. We also provide a tutorial website and computer code in R to implement this approach. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
{"title":"Characterizing affect dynamics with a damped linear oscillator model: Theoretical considerations and recommendations for individual-level applications.","authors":"Mar J F Ollero, Eduardo Estrada, Michael D Hunter, Pablo F Cáncer","doi":"10.1037/met0000615","DOIUrl":"https://doi.org/10.1037/met0000615","url":null,"abstract":"<p><p>People show stable differences in the way their affect fluctuates over time. Within the general framework of dynamical systems, the damped linear oscillator (DLO) model has been proposed as a useful approach to study affect dynamics. The DLO model can be applied to repeated measures provided by a single individual, and the resulting parameters can capture relevant features of the person's affect dynamics. Focusing on negative affect, we provide an accessible interpretation of the DLO model parameters in terms of emotional lability, resilience, and vulnerability. We conducted a Monte Carlo study to test the DLO model performance under different empirically relevant conditions in terms of individual characteristics and sampling scheme. We used state-space models in continuous time. The results show that, under certain conditions, the DLO model is able to accurately and efficiently recover the parameters underlying the affective dynamics of a single individual. We discuss the results and the theoretical and practical implications of using this model, illustrate how to use it for studying psychological phenomena at the individual level, and provide specific recommendations on how to collect data for this purpose. We also provide a tutorial website and computer code in R to implement this approach. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41238100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A general framework for the inclusion of time-varying and time-invariant covariates in latent state-trait models.
Pub Date: 2023-10-01 | Epub Date: 2023-07-20 | DOI: 10.1037/met0000592
Lara Oeltjen, Tobias Koch, Jana Holtmann, Fabian F Münch, Michael Eid, Fridtjof W Nussbeck
Latent state-trait (LST) models are increasingly applied in psychology. Although existing LST models offer many possibilities for analyzing variability and change, they do not allow researchers to relate time-varying or time-invariant covariates, or a combination of both, to loading, intercept, and factor variance parameters in LST models. We present a general framework for the inclusion of nominal and/or continuous time-varying and time-invariant covariates in LST models. The new framework builds on modern LST theory and Bayesian moderated nonlinear factor analysis and is termed the moderated nonlinear LST (MN-LST) framework. The MN-LST framework offers new modeling possibilities and allows for a fine-grained analysis of trait change, person-by-situation interaction effects, and inter- or intraindividual variability. The new MN-LST approach is compared to alternative modeling strategies. The advantages of the MN-LST approach are illustrated in an empirical application examining dyadic coping in romantic relationships. Finally, the advantages and limitations of the approach are discussed, and practical recommendations are provided. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
{"title":"A general framework for the inclusion of time-varying and time-invariant covariates in latent state-trait models.","authors":"Lara Oeltjen, Tobias Koch, Jana Holtmann, Fabian F Münch, Michael Eid, Fridtjof W Nussbeck","doi":"10.1037/met0000592","DOIUrl":"10.1037/met0000592","url":null,"abstract":"<p><p>Latent state-trait (LST) models are increasingly applied in psychology. Although existing LST models offer many possibilities for analyzing variability and change, they do not allow researchers to relate time-varying or time-invariant covariates, or a combination of both, to loading, intercept, and factor variance parameters in LST models. We present a general framework for the inclusion of nominal and/or continuous time-varying and time-invariant covariates in LST models. The new framework builds on modern LST theory and Bayesian moderated nonlinear factor analysis and is termed moderated nonlinear LST (MN-LST) framework. The MN-LST framework offers new modeling possibilities and allows for a fine-grained analysis of trait change, person-by-situation interaction effects, as well as inter- or intraindividual variability. The new MN-LST approach is compared to alternative modeling strategies. The advantages of the MN-LST approach are illustrated in an empirical application examining dyadic coping in romantic relationships. Finally, the advantages and limitations of the approach are discussed, and practical recommendations are provided. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1005-1028"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9893127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How within-person effects shape between-person differences: A multilevel structural equation modeling perspective.
Pub Date: 2023-10-01 | Epub Date: 2022-04-21 | DOI: 10.1037/met0000481
Andreas B Neubauer, Annette Brose, Florian Schmiedek
Various theoretical accounts suggest that within-person effects relating to everyday experiences (assessed, e.g., via experience sampling studies or daily diary studies) are a central element for understanding between-person differences in future outcomes. In this regard, it is often assumed that the within-person effect of a time-varying predictor X on a time-varying mediator M contributes to the long-term development of an outcome variable Y. In the present work, however, we demonstrate that traditional multilevel mediation approaches fall short in capturing this proposed mechanism. We suggest that a model in which between-person differences in the strength of within-person effects predict the outcome Y, mediated via mean levels in M, aligns more adequately with the presumed theoretical account that within-person effects shape between-person differences. Using simulated data, we show that the central parameters of this multilevel structural equation model can be recovered well in most of the investigated scenarios. Our approach has important implications for whether or not to control for mean levels in models with within-person effects as predictors. We illustrate the model using empirical data targeting the question of whether the within-person association of the occurrence of daily stressors (X) with daily experiences of negative affect (M) longitudinally predicts between-person differences in change in depressive symptoms (Y). Implications for other multilevel designs and intervention studies are discussed. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
{"title":"How within-person effects shape between-person differences: A multilevel structural equation modeling perspective.","authors":"Andreas B Neubauer, Annette Brose, Florian Schmiedek","doi":"10.1037/met0000481","DOIUrl":"https://doi.org/10.1037/met0000481","url":null,"abstract":"<p><p>Various theoretical accounts suggest that within-person effects relating to everyday experiences (assessed, e.g., via experience sampling studies or daily diary studies) are a central element for understanding between-person differences in future outcomes. In this regard, it is often assumed that the within-person effect of a time-varying predictor X on a time-varying mediator M contributes to the long-term development in an outcome variable Y. In the present work, we demonstrate that traditional multilevel mediation approaches fall short in capturing the proposed mechanism, however. We suggest that a model in which between-person differences in the strength of within-person effects predict the outcome Y mediated via mean levels in M more adequately aligns with the presumed theoretical account that within-person effects shape between-person differences. Using simulated data, we show that the central parameters of this multilevel structural equation model can be recovered well in most of the investigated scenarios. Our approach has important implications for whether or not to control for mean levels in models with within-person effects as predictors. We illustrate the model using empirical data targeting the question if the within-person association of occurrence of daily stressors (X) with daily experiences of negative affect (M) longitudinally predicts between-person differences in change in depressive symptoms (Y). Implications for other multilevel designs and intervention studies are discussed. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 5","pages":"1069-1086"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41210715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}