Pub Date: 2025-01-15, DOI: 10.1080/00273171.2024.2436413
Oisín Ryan, Jonas M B Haslbeck, Lourens J Waldorp
Time series analysis is increasingly popular across scientific domains. A key concept in time series analysis is stationarity, the stability of the statistical properties of a time series. Understanding stationarity is crucial to addressing frequent questions in time series analysis: what are the consequences of failing to model non-stationarity, how can the mechanisms generating non-stationarity be determined, and how should those mechanisms be modeled (i.e., by differencing or detrending)? However, many empirical researchers have a limited understanding of stationarity, which can lead to the use of incorrect research practices and misleading substantive conclusions. In this paper, we address this problem by answering these questions in an accessible way. To this end, we study how researchers can use detrending and differencing to model trends in time series analysis. We show via simulation the consequences of modeling trends inappropriately, and evaluate the performance of one popular approach to distinguishing different trend types in empirical data. We present these results in an accessible way, providing an extensive introduction to key concepts in time series analysis, illustrated throughout with simple examples. Finally, we discuss a number of take-home messages and extensions to standard approaches, which directly address more complex time series analysis problems encountered by empirical researchers.
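The distinction between the two trend types can be sketched in a few lines of Python: a deterministic trend is made stationary by detrending (regressing out time), while a stochastic trend (a random walk) is made stationary by first-differencing. The series and coefficients below are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(1)
n = 200

# Deterministic trend: y_t = 0.5*t + noise; handled by detrending
det = [0.5 * t + random.gauss(0, 1) for t in range(n)]

# Stochastic trend: random walk y_t = y_{t-1} + noise; handled by differencing
walk = [0.0]
for _ in range(n - 1):
    walk.append(walk[-1] + random.gauss(0, 1))

def difference(y):
    """First differences: z_t = y_t - y_{t-1}."""
    return [b - a for a, b in zip(y, y[1:])]

def detrend(y):
    """Residuals after removing a least-squares linear trend."""
    t = list(range(len(y)))
    tbar, ybar = sum(t) / len(t), sum(y) / len(y)
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return [yi - ybar - slope * (ti - tbar) for ti, yi in zip(t, y)]

resid_trend = detrend(det)     # mean-zero residuals by construction
resid_walk = difference(walk)  # the (stationary) increments of the walk
```

Applying the wrong operation is the failure mode the paper simulates: differencing a purely deterministic trend or merely detrending a random walk leaves artifacts in the residual series.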
Title: "Non-Stationarity in Time-Series Analysis: Modeling Stochastic and Deterministic Trends." Multivariate Behavioral Research, pp. 1-33.
Pub Date: 2025-01-13, DOI: 10.1080/00273171.2024.2444949
Xiao Liu, Mark Eddy, Charles R Martinez
When studying effect heterogeneity between different subgroups (i.e., moderation), researchers are frequently interested in the mediation mechanisms underlying the heterogeneity, that is, the mediated moderation. For assessing mediated moderation, conventional methods typically require parametric models to define mediated moderation, which has limitations when parametric models may be misspecified and when causal interpretation is of interest. For causal interpretations about mediation, causal mediation analysis is increasingly popular but is underdeveloped for mediated moderation analysis. In this study, we extend the causal mediation literature, and we propose a novel method for mediated moderation analysis. Using the potential outcomes framework, we obtain two causal estimands that decompose the total moderation: (i) the mediated moderation attributable to a mediator and (ii) the remaining moderation unattributable to the mediator. We also develop a multiply robust estimation method for the mediated moderation analysis, which can incorporate machine learning methods in the inference of the causal estimands. We evaluate the proposed method through simulations. We illustrate the proposed mediated moderation analysis by assessing the mediation mechanism that underlies the gender difference in the effect of a preventive intervention on adolescent behavioral outcomes.
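In the linear special case, the decomposition of total moderation has a closed form, which the sketch below illustrates with made-up path coefficients. This is an illustration of the estimands only, not the authors' multiply robust estimator.

```python
# Illustrative linear system with assumed path coefficients (not from the paper):
#   M = a_g * T            treatment -> mediator path, differing by subgroup g
#   Y = c_g * T + b * M    direct effect c_g plus mediator effect b
a = {0: 0.4, 1: 1.0}   # T -> M path per subgroup
c = {0: 0.3, 1: 0.5}   # direct T -> Y path per subgroup
b = 0.5                # M -> Y path

# Total effect of T in subgroup g
effect = {g: c[g] + b * a[g] for g in (0, 1)}

total_moderation = effect[1] - effect[0]                       # 0.5
mediated_moderation = b * (a[1] - a[0])                        # 0.3, runs through M
remaining_moderation = total_moderation - mediated_moderation  # 0.2, direct paths
```

The two causal estimands in the paper generalize this additive split to the nonparametric potential-outcomes setting, where no such linear form is assumed.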
Title: "Causal Estimands and Multiply Robust Estimation of Mediated-Moderation." Multivariate Behavioral Research, pp. 1-27.
Pub Date: 2024-12-27, DOI: 10.1080/00273171.2024.2436418
Lan Luo, Kathleen M Gates, Kenneth A Bollen
We present the R package MIIVefa, designed to implement the MIIV-EFA algorithm. This algorithm explores and identifies the underlying factor structure within a set of variables. The resulting model is not a typical exploratory factor analysis (EFA) model because some loadings are fixed to zero and it allows users to include hypothesized correlated errors such as might occur with longitudinal data. As such, it resembles a confirmatory factor analysis (CFA) model. But, unlike CFA, the MIIV-EFA algorithm determines the number of factors and the items that load on these factors directly from the data. We provide both simulation and empirical examples to illustrate the application of MIIVefa and discuss its benefits and limitations.
Title: "MIIVefa: An R Package for a New Type of Exploratory Factor Analysis Using Model-Implied Instrumental Variables." Multivariate Behavioral Research, pp. 1-9.
Pub Date: 2024-12-23, DOI: 10.1080/00273171.2024.2436406
Inhan Kang
In this article, we propose latent variable models that jointly account for responses and response times (RTs) in multidimensional personality measurement. We address two key research questions regarding the latent structure of RT distributions through model comparisons. First, we decompose RT into decision and non-decision times by incorporating irreducible minimum shifts in RT distributions, as done in cognitive decision-making models. Second, we investigate whether the speed factor underlying decision times should be multidimensional with the same latent structure as personality traits, or whether a unidimensional speed factor suffices. Comprehensive model comparisons across four distinct datasets suggest that a joint model with person-specific parameters to account for shifts in RT distributions and a unidimensional speed factor provides the best account of ordinal responses and RTs. Posterior predictive checks further confirm these findings. Additionally, simulation studies validate the parameter recovery of the proposed models and support the empirical results. Most importantly, failing to account for the irreducible minimum shift in RT distributions leads to systematic biases in other model components and severe underestimation of the nonlinear relationship between responses and RTs.
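The role of the irreducible minimum ("shift") can be illustrated with a toy generative model. The shifted lognormal below is an assumption chosen for illustration, not the paper's measurement model.

```python
import random
import statistics

random.seed(7)

# Toy generative model: RT = t0 (non-decision shift) + lognormal decision time
t0 = 0.30
rts = [t0 + random.lognormvariate(-1.0, 0.5) for _ in range(5000)]

# A crude person-level estimate of the shift is a value just below the observed
# minimum RT; subtracting it recovers approximate decision times
shift_hat = 0.97 * min(rts)
decision = [rt - shift_hat for rt in rts]

# Ignoring the shift (treating it as zero) inflates every apparent decision time
# by the same constant, which is the kind of systematic bias the paper documents
bias = statistics.fmean(rts) - statistics.fmean(decision)
```

In the proposed models the shift is a person-specific parameter estimated jointly with the rest of the model rather than plugged in from the sample minimum; the sketch only conveys why omitting it distorts the remaining components.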
Title: "On the Latent Structure of Responses and Response Times from Multidimensional Personality Measurement with Ordinal Rating Scales." Multivariate Behavioral Research, pp. 1-30.
Pub Date: 2024-12-15, DOI: 10.1080/00273171.2024.2436420
Anja F Ernst, Eva Ceulemans, Laura F Bringmann, Janne Adolf
Nowadays research into affect frequently employs intensive longitudinal data to assess fluctuations in daily emotional experiences. The resulting data are often analyzed with moderated autoregressive models to capture the influences of contextual events on the emotion dynamics. The presence of noise (e.g., measurement error) in the measures of the contextual events, however, is commonly ignored in these models. Disregarding noise in these covariates when it is present may result in biased parameter estimates and wrong conclusions drawn about the underlying emotion dynamics. In a simulation study we evaluate the estimation accuracy, assessed in terms of bias and variance, of different moderated autoregressive models in the presence of noise in the covariate. We show that estimation accuracy decreases when the amount of noise in the covariate increases. We also show that this bias is magnified by a larger effect of the covariate, a slower switching frequency of the covariate, a discrete rather than a continuous covariate, and constant rather than occasional noise in the covariate. We also show that the bias that results from a noisy covariate does not decrease when the number of observations increases. We end with a few recommendations for applying moderated autoregressive models based on our simulation.
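The core mechanism is classical errors-in-variables attenuation. The sketch below strips away the autoregressive part and shows, under assumed simulation settings, that the bias depends on the covariate's noise-to-signal ratio and does not shrink as the number of observations grows.

```python
import random

random.seed(3)
n = 20000
b_true = 1.0
x = [random.gauss(0, 1) for _ in range(n)]            # true covariate, var = 1
y = [b_true * xi + random.gauss(0, 0.5) for xi in x]  # outcome
w = [xi + random.gauss(0, 1) for xi in x]             # observed noisy covariate

def slope(u, v):
    """OLS slope of v regressed on u."""
    ub, vb = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - ub) * (c - vb) for a, c in zip(u, v))
            / sum((a - ub) ** 2 for a in u))

clean = slope(x, y)  # close to 1.0
noisy = slope(w, y)  # close to var(x) / (var(x) + var(noise)) = 0.5
```

Because the attenuation factor is a property of the covariate's reliability, not of sampling error, collecting more observations tightens the estimate around the wrong value; this mirrors the simulation finding that the bias does not decrease with more observations.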
Title: "Evaluating Contextual Models for Intensive Longitudinal Data in the Presence of Noise." Multivariate Behavioral Research, pp. 1-21.
Pub Date: 2024-12-11, DOI: 10.1080/00273171.2024.2432918
Jannis Kreienkamp, Maximilian Agostini, Rei Monden, Kai Epstude, Peter de Jonge, Laura F Bringmann
Psychological researchers and practitioners collect increasingly complex time series data aimed at identifying differences between the developments of participants or patients. Past research has proposed a number of dynamic measures that describe meaningful developmental patterns for psychological data (e.g., instability, inertia, linear trend). Yet, commonly used clustering approaches are often not able to include these meaningful measures (e.g., due to model assumptions). We propose feature-based time series clustering as a flexible, transparent, and well-grounded approach that clusters participants based on the dynamic measures directly using common clustering algorithms. We introduce the approach and illustrate the utility of the method with real-world empirical data that highlight common ESM challenges of multivariate conceptualizations, structural missingness, and non-stationary trends. We use the data to showcase the main steps of input selection, feature extraction, feature reduction, feature clustering, and cluster evaluation. We also provide practical algorithm overviews and readily available code for data preparation, analysis, and interpretation.
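The feature-extraction step can be as simple as computing a handful of dynamic measures per series. The function below is an illustrative sketch, not the authors' code; it computes four measures of the kind named in the abstract, and the resulting feature vectors can be passed to any standard clustering algorithm (e.g., k-means).

```python
import statistics

def features(y):
    """Dynamic measures of one time series, usable as clustering features."""
    mean = statistics.fmean(y)
    sd = statistics.stdev(y)
    d = [v - mean for v in y]
    # lag-1 autocorrelation ("inertia")
    ac1 = sum(a * b for a, b in zip(d, d[1:])) / sum(a * a for a in d)
    # least-squares linear trend
    t = range(len(y))
    tbar = statistics.fmean(t)
    trend = (sum((ti - tbar) * di for ti, di in zip(t, d))
             / sum((ti - tbar) ** 2 for ti in t))
    return {"mean": mean, "sd": sd, "inertia": ac1, "trend": trend}

demo = features([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])  # a pure upward trend
```

Computing features per participant sidesteps the model assumptions of likelihood-based clustering and keeps the clustering input transparent, which is the selling point of the feature-based approach.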
Title: "A Gentle Introduction and Application of Feature-Based Clustering with Psychological Time Series." Multivariate Behavioral Research, pp. 1-31.
Pub Date: 2024-12-09, DOI: 10.1080/00273171.2024.2430630
Steven P Reise, Jared M Block, Maxwell Mansolf, Mark G Haviland, Benjamin D Schalet, Rachel Kimerling
The application of unidimensional IRT models requires item response data to be unidimensional. Often, however, item response data contain a dominant dimension, as well as one or more nuisance dimensions caused by content clusters. Applying a unidimensional IRT model to multidimensional data causes violations of local independence, which can vitiate IRT applications. To evaluate and, possibly, remedy the problems caused by forcing unidimensional models onto multidimensional data, we consider the creation of a projected unidimensional IRT model, where the multidimensionality caused by nuisance dimensions is controlled for by integrating them out from the model. Specifically, when item response data have a bifactor structure, one can create a unidimensional model based on projecting to the general factor. Importantly, the projected unidimensional IRT model can be used as a benchmark for comparison to a unidimensional model to judge the practical consequences of multidimensionality. Limitations of the proposed approach are detailed.
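For a normal-ogive (probit) item, integrating a nuisance specific factor out of a bifactor model has a well-known closed form: the general-factor discrimination is attenuated by a factor of sqrt(1 + a_s^2). The check below verifies that identity numerically; the item parameters are arbitrary illustrations, and the probit form is an assumption (the paper's models may be parameterized differently).

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Normal-ogive bifactor item: P(correct | tg, ts) = Phi(ag*tg + as*ts - b)
ag, a_s, b = 1.2, 0.8, 0.3

def projected(tg):
    """Integrate out ts ~ N(0,1): Phi((ag*tg - b) / sqrt(1 + as^2))."""
    return phi((ag * tg - b) / math.sqrt(1 + a_s ** 2))

def numeric(tg, k=4000, lim=8.0):
    """Midpoint-rule numerical integration over the specific factor."""
    h = 2 * lim / k
    total = 0.0
    for i in range(k):
        ts = -lim + (i + 0.5) * h
        w = math.exp(-ts * ts / 2) / math.sqrt(2 * math.pi)
        total += w * phi(ag * tg + a_s * ts - b) * h
    return total
```

The attenuated curve is the "projected" unidimensional item response function: it describes the item in terms of the general factor alone, which is what makes it a usable benchmark against a naively fitted unidimensional model.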
Title: "Using Projective IRT to Evaluate the Effects of Multidimensionality on Unidimensional IRT Model Parameters." Multivariate Behavioral Research, pp. 1-17.
Pub Date: 2024-11-26, DOI: 10.1080/00273171.2024.2428222
Lydia G Speyer, Xinxin Zhu, Yi Yang, Denis Ribeaud, Manuel Eisner
Random-intercept cross-lagged panel models (RI-CLPMs) are increasingly used to investigate research questions focusing on how one variable at one time point affects another variable at the subsequent time point. Due to the implied temporal sequence of events in such research designs, interpretations of RI-CLPMs primarily focus on longitudinal cross-lagged paths while disregarding concurrent associations, modeling these only as residual covariances. However, this may bias cross-lagged effects, especially when data collected at the same time point refer to different reference timeframes, creating a temporal sequence of events for constructs measured concurrently. To examine this issue, we conducted a series of empirical analyses, using data from the longitudinal z-proso study, examining how modeling or not modeling directional within-time point associations impacts inferences drawn from RI-CLPMs. Results highlight that not considering directional concurrent effects may lead to biased cross-lagged effects. Thus, it is essential to carefully consider potential directional concurrent effects when choosing models to analyze directional associations between variables over time.
Title: "On the Importance of Considering Concurrent Effects in Random-Intercept Cross-Lagged Panel Modelling: Example Analysis of Bullying and Internalising Problems." Multivariate Behavioral Research, pp. 1-17.
Pub Date: 2024-11-18, DOI: 10.1080/00273171.2024.2424514
Alessandro Varacca
In this paper, we propose a Bayesian causal mediation approach to the analysis of experimental data when both the outcome and the mediator are measured through structured questionnaires based on Likert-scaled inquiries. Our estimation strategy builds upon the error-in-variables literature and, specifically, it leverages Item Response Theory to explicitly model the observed surrogate mediator and outcome measures. We employ their elicited latent counterparts in a simple g-computation algorithm, where we exploit the fundamental identifying assumptions of causal mediation analysis to impute all the relevant counterfactuals and estimate the causal parameters of interest. We finally devise a sensitivity analysis procedure to test the robustness of the proposed methods to the restrictive requirement of mediator's conditional ignorability. We demonstrate the functioning of our proposed methodology through an empirical application using survey data from an online experiment on food purchasing intentions and the effect of different labeling regimes.
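At its core, the g-computation step imputes counterfactual mediator values and plugs them into the outcome model. With linear models and made-up coefficients (the paper instead works with IRT-elicited latent mediator and outcome scores), the idea reduces to:

```python
# Assumed linear data-generating models, for illustration of g-computation only:
a0, a1 = 0.2, 0.7           # mediator model:  E[M | T] = a0 + a1*T
b0, b1, b2 = 0.1, 0.4, 0.5  # outcome model:   E[Y | T, M] = b0 + b1*T + b2*M

def EM(t):
    return a0 + a1 * t

def EY(t, m):
    return b0 + b1 * t + b2 * m

# g-computation: impute counterfactual mediator values, feed them to the outcome model
nde = EY(1, EM(0)) - EY(0, EM(0))    # direct effect, mediator held at its T=0 value
nie = EY(1, EM(1)) - EY(1, EM(0))    # indirect effect, through the mediator
total = EY(1, EM(1)) - EY(0, EM(0))  # total effect = nde + nie
```

The decomposition relies on the same identifying assumptions named in the abstract, chiefly conditional ignorability of the mediator, which is exactly what the paper's sensitivity analysis probes.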
Title: "Latently Mediating: A Bayesian Take on Causal Mediation Analysis with Structured Survey Data." Multivariate Behavioral Research, pp. 1-23.
Pub Date: 2024-11-01, Epub Date: 2023-02-10, DOI: 10.1080/00273171.2023.2170963
L Lichtenberg, I Visser, M E J Raijmakers
This study is the first to investigate how 3-year-olds learn simple rules from feedback using the Toddler Card Sorting Task (TCST). To account for intra- and inter-individual differences in the learning process, latent Markov models were fitted to the time series of accuracy responses using maximum likelihood techniques (Visser et al., 2002). In a first, exploratory study (N = 110, 3- to 5-year-olds), a considerable group of 3-year-olds applied a hypothesis-testing learning strategy. A second, preregistered study (3-year-olds, N = 60) confirmed these results. Under supportive learning conditions, a majority of 3-year-olds was capable of hypothesis testing. Furthermore, older children and those with larger working memory capacities were more likely to use hypothesis testing, even though the latter group perseverated more than younger children or those with smaller working memory capacities. 3-year-olds are more advanced feedback-learners than previously assumed.
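The likelihood of such a latent Markov model is computed with the forward algorithm. A minimal two-state sketch with assumed parameters (a guessing state and an absorbing hypothesis-testing state; all values invented for illustration) is:

```python
import math

# Two latent states: 0 = guessing, 1 = hypothesis testing; all parameters assumed
init = [0.5, 0.5]        # initial state probabilities
trans = [[0.9, 0.1],     # guessing may switch into hypothesis testing
         [0.0, 1.0]]     # hypothesis testing is absorbing
p_correct = [0.5, 0.9]   # accuracy within each latent state

def loglik(obs):
    """Log-likelihood of a binary accuracy series via the forward algorithm."""
    def emit(s, o):
        return p_correct[s] if o else 1 - p_correct[s]
    alpha = [init[s] * emit(s, obs[0]) for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in range(2)) * emit(s, o)
                 for s in range(2)]
    return math.log(sum(alpha))
```

Maximum likelihood estimation then searches over the free parameters to maximize this quantity summed across children; for longer series the forward variables should be rescaled at each step to avoid numerical underflow.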
Title: "Latent Markov Models to Test the Strategy Use of 3-Year-Olds in a Rule-Based Feedback-Learning Task." Multivariate Behavioral Research, pp. 1123-1136.