Pub Date: 2024-07-01 | Epub Date: 2024-07-10 | DOI: 10.1080/00273171.2024.2315557
Benedikt Langenberg, Jonathan L Helm, Axel Mayer
Latent repeated measures ANOVA (L-RM-ANOVA) has recently been proposed as an alternative to traditional repeated measures ANOVA. L-RM-ANOVA builds upon structural equation modeling and enables researchers to investigate interindividual differences in main/interaction effects, examine custom contrasts, incorporate a measurement model, and account for missing data. However, L-RM-ANOVA uses maximum likelihood and thus cannot incorporate prior information and can have poor statistical properties in small samples. We show how L-RM-ANOVA can be used with Bayesian estimation to resolve the aforementioned issues. We demonstrate how to place informative priors on model parameters that constitute main and interaction effects. We further show how to place weakly informative priors on standardized parameters, which can be used when no prior information is available. We conclude that Bayesian estimation can lower Type I error and bias, and increase power and efficiency when priors are chosen adequately. We demonstrate the approach using a real empirical example and guide readers through the specification of the model. We argue that ANOVA tables and incomplete descriptive statistics are not sufficient information to specify informative priors, and we identify which parameter estimates should be reported in future research; thereby promoting cumulative research.
Title: Bayesian Analysis of Multi-Factorial Experimental Designs Using SEM. Multivariate Behavioral Research, pp. 716-737.
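The small-sample benefit of informative priors that the abstract describes can be illustrated with a deliberately simplified conjugate sketch (not the authors' SEM formulation): a normal prior on a within-person mean difference combined with a normal likelihood. All numbers below are illustrative.

```python
import math

def posterior_normal(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal-normal update for a mean effect with known
    residual sd: precision-weighted average of prior and sample mean."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / data_sd ** 2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# With only n = 10 observations, an informative prior centered at 0.5
# pulls a noisy sample mean of 0.9 toward it and tightens the estimate.
post_mean, post_sd = posterior_normal(prior_mean=0.5, prior_sd=0.2,
                                      data_mean=0.9, data_sd=1.0, n=10)
```

The posterior mean lands between the prior mean and the sample mean, with a smaller sd than either source alone, which is the shrinkage mechanism behind the reduced bias and error rates claimed in the abstract.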
Pub Date: 2024-07-01 | Epub Date: 2024-07-11 | DOI: 10.1080/00273171.2024.2335411
Sebastian Kueppers, Richard Rau, Florian Scharf
Mobile applications offer a wide range of opportunities for psychological data collection, such as increased ecological validity and greater acceptance by participants compared to traditional laboratory studies. However, app-based psychological data also pose data-analytic challenges because of the complexities introduced by missingness and interdependence of observations. Consequently, researchers must weigh the advantages and disadvantages of app-based data collection to decide on the scientific utility of their proposed app study. For instance, some studies might only be worthwhile if they provide adequate statistical power. However, the complexity of app data forestalls the use of simple analytic formulas to estimate properties such as power. In this paper, we demonstrate how Monte Carlo simulations can be used to investigate the impact of app usage behavior on the utility of app-based psychological data. We introduce a set of questions to guide simulation implementation and showcase how we answered them for the simulation in the context of the guessing game app Who Knows (Rau et al., 2023). Finally, we give a brief overview of the simulation results and the conclusions we have drawn from them for real-world data generation. Our results can serve as an example of how to use a simulation approach for planning real-world app-based data collection.
Title: Using Monte Carlo Simulation to Forecast the Scientific Utility of Psychological App Studies: A Tutorial. Multivariate Behavioral Research, pp. 879-893.
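A minimal version of the kind of Monte Carlo power forecast described above can be sketched for a one-sample design with random user dropout; the effect size, dropout rate, and test are illustrative stand-ins, not the Who Knows study design.

```python
import math
import random

def simulate_power(n_users, effect, dropout, n_sims=2000, seed=1):
    """Share of simulated app studies in which a two-sided test at the
    5% level detects a true mean effect, given random user dropout."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Each recruited user independently drops out before contributing.
        xs = [rng.gauss(effect, 1.0) for _ in range(n_users)
              if rng.random() > dropout]
        n = len(xs)
        if n < 2:
            continue
        m = sum(xs) / n
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        if abs(m / (sd / math.sqrt(n))) > 1.96:
            hits += 1
    return hits / n_sims

# Dropout shrinks the effective sample and, with it, the power.
power_full = simulate_power(n_users=100, effect=0.3, dropout=0.0)
power_drop = simulate_power(n_users=100, effect=0.3, dropout=0.5)
```

Replacing the dropout mechanism with an empirically motivated app-usage model is exactly the step the tutorial's guiding questions are meant to structure.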
Pub Date: 2024-07-01 | Epub Date: 2024-04-01 | DOI: 10.1080/00273171.2024.2318784
Samantha F Anderson
Researchers are often interested in comparing predictors, a practice commonly done via informal comparisons of standardized regression slopes. However, formal interval-based approaches offer advantages over informal comparison. Specifically, this article examines a delta-method-based confidence interval for the difference between two standardized regression coefficients, building upon previous work on confidence intervals for single coefficients. Using Monte Carlo simulation studies, the proposed approach is evaluated at finite sample sizes with respect to coverage rate, interval width, Type I error rate, and statistical power under a variety of conditions, and is shown to outperform an alternative approach that uses the standard covariance matrix found in regression textbooks. Additional simulations evaluate current software implementations, small sample performance, and multiple comparison procedures for simultaneously testing multiple differences of interest. Guidance on sample size planning for narrow confidence intervals, an R function to conduct the proposed method, and two empirical demonstrations are provided. The goal is to offer researchers a different tool in their toolbox for when comparisons among standardized coefficients are desired, as a supplement to, rather than a replacement for, other potentially useful analyses.
Title: A Confidence Interval for the Difference Between Standardized Regression Coefficients. Multivariate Behavioral Research, pp. 758-780.
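As a rough companion to the delta-method interval studied in the article, a percentile bootstrap interval for the difference of two standardized slopes can be sketched using the closed-form two-predictor solution in terms of correlations; this is a simple alternative for illustration, not the authors' proposed method.

```python
import math
import random

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def std_slope_diff(x1, x2, y):
    """Difference of the standardized slopes in y ~ x1 + x2, via the
    two-predictor closed form in terms of pairwise correlations."""
    r1y, r2y, r12 = corr(x1, y), corr(x2, y), corr(x1, x2)
    return ((r1y - r12 * r2y) - (r2y - r12 * r1y)) / (1.0 - r12 ** 2)

def bootstrap_ci(x1, x2, y, n_boot=1000, seed=7):
    """Percentile bootstrap interval for the standardized slope difference."""
    rng = random.Random(seed)
    n = len(y)
    stats = sorted(
        std_slope_diff([x1[i] for i in idx], [x2[i] for i in idx],
                       [y[i] for i in idx])
        for idx in ([rng.randrange(n) for _ in range(n)]
                    for _ in range(n_boot)))
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Toy data in which x1 is clearly the stronger predictor.
rng = random.Random(11)
x1 = [rng.gauss(0, 1) for _ in range(200)]
x2 = [rng.gauss(0, 1) for _ in range(200)]
y = [0.7 * a + rng.gauss(0, 1) for a in x1]
lo, hi = bootstrap_ci(x1, x2, y)
```

An interval-based comparison like this makes the uncertainty in the difference explicit, which is the article's central argument against informally eyeballing two standardized slopes.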
Pub Date: 2024-07-01 | Epub Date: 2024-05-24 | DOI: 10.1080/00273171.2024.2335391
Dalila Failli, Maria Francesca Marino, Francesca Martella
Networks consist of interconnected units, known as nodes, and make it possible to formally describe interactions within a system. Specifically, bipartite networks depict relationships between two distinct sets of nodes, designated as sending and receiving nodes. An integral aspect of bipartite network analysis often involves identifying clusters of nodes with similar behaviors. The computational complexity of models for large bipartite networks poses a challenge; to mitigate it, we employ a Mixture of Latent Trait Analyzers (MLTA) for node clustering. Our approach extends the MLTA to include covariates and introduces a double EM algorithm for estimation. Applying our method to COVID-19 data, with sending nodes representing patients and receiving nodes representing preventive measures, enables dimensionality reduction and the identification of meaningful groups. We present simulation results demonstrating the accuracy of the proposed method.
Title: Finite Mixtures of Latent Trait Analyzers With Concomitant Variables for Bipartite Networks: An Analysis of COVID-19 Data. Multivariate Behavioral Research, pp. 801-817.
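A heavily simplified stand-in for the clustering idea above: a two-class latent class (Bernoulli mixture) model for a binary bipartite matrix, fit with EM. The MLTA in the article additionally includes continuous latent traits, concomitant covariates, and a double EM algorithm; none of that is reproduced here, and the data are invented.

```python
import math
import random

def latent_class_em(X, n_iter=50):
    """Two-class Bernoulli mixture for a binary matrix (rows: sending
    nodes, columns: receiving nodes), fit with a plain EM algorithm."""
    n, m = len(X), len(X[0])
    # Initialize class profiles from the first and last rows (smoothed).
    p = [[0.75 if X[0][j] else 0.25 for j in range(m)],
         [0.75 if X[-1][j] else 0.25 for j in range(m)]]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities r[i][k] proportional to
        # pi_k * prod_j p_kj^x_ij * (1 - p_kj)^(1 - x_ij).
        r = []
        for row in X:
            lk = []
            for k in range(2):
                ll = math.log(pi[k])
                for j, x in enumerate(row):
                    ll += math.log(p[k][j] if x else 1 - p[k][j])
                lk.append(ll)
            mx = max(lk)
            w = [math.exp(v - mx) for v in lk]
            s = sum(w)
            r.append([v / s for v in w])
        # M-step: update mixing weights and item probabilities.
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / n
            for j in range(m):
                num = sum(ri[k] * row[j] for ri, row in zip(r, X))
                # Clamp away from 0/1 to keep the log-likelihood finite.
                p[k][j] = min(max(num / nk, 1e-3), 1 - 1e-3)
    labels = [0 if ri[0] >= ri[1] else 1 for ri in r]
    return labels, p, pi

# Two planted clusters: rows with high vs. low response probabilities.
rng = random.Random(5)
X = ([[1 if rng.random() < 0.9 else 0 for _ in range(8)] for _ in range(20)]
     + [[1 if rng.random() < 0.1 else 0 for _ in range(8)] for _ in range(20)])
labels, p, pi = latent_class_em(X)
```

With well-separated planted clusters, EM recovers the row partition; the MLTA's latent traits relax this model's assumption that responses are independent within a class.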
Pub Date: 2024-07-01 | DOI: 10.1080/00273171.2024.2335401
Lea Schumacher, Julian Burger, Jette Echterhoff, Levente Kriston
The network approach to psychopathology, which assesses associations between individual symptoms, has recently been applied to evaluate treatments for mental disorders. While various options for conducting network analyses in intervention research exist, an overview and evaluation of these approaches are currently missing. Therefore, we conducted a review of network analyses in intervention research. Studies were included if they constructed a symptom network, analyzed data collected before, during, or after treatment of a mental disorder, and yielded information about the treatment effect. The 56 included studies were reviewed regarding their methodological and analytic strategies. About half of the studies based on data from randomized trials conducted a network intervention analysis, while the other half compared networks between treatment groups. The majority of studies estimated cross-sectional networks, even when repeated measures were available. All but five studies investigated networks on the group level. This review highlights that current methodological practices limit the information that can be gained through network analyses in intervention research. We discuss the strengths and limitations of certain methodological and analytic strategies and propose that further work is needed to use the full potential of the network approach in intervention research.
Title: Methodological and Statistical Practices of Using Symptom Networks to Evaluate Mental Health Interventions: A Review and Reflections. Multivariate Behavioral Research, pp. 663-676.
Pub Date: 2024-07-01 | Epub Date: 2024-05-11 | DOI: 10.1080/00273171.2024.2337340
Xynthia Kavelaars, Joris Mulder, Maurits Kaptein
The effects of treatments may differ between persons with different characteristics. Addressing such treatment heterogeneity is crucial to investigate whether patients with specific characteristics are likely to benefit from a new treatment. The current paper presents a novel Bayesian method for superiority decision-making in the context of randomized controlled trials with multivariate binary responses and heterogeneous treatment effects. The framework is based on three elements: a) Bayesian multivariate logistic regression analysis with a Pólya-Gamma expansion; b) a transformation procedure that maps the estimated regression coefficients onto a more intuitive multivariate probability scale (i.e., success probabilities and the differences between them); and c) a compatible decision procedure for treatment comparison with prespecified decision error rates. Procedures for a priori sample size estimation under a non-informative prior distribution are included. A numerical evaluation demonstrated that decisions based on a priori sample size estimation resulted in anticipated error rates in the trial population as well as in subpopulations. Further, average and conditional treatment effect parameters could be estimated without bias when the sample was large enough. An illustration with the International Stroke Trial dataset revealed a trend toward heterogeneous effects among stroke patients: something that would have remained undetected had the analyses been limited to average treatment effects.
Title: Bayesian Multivariate Logistic Regression for Superiority and Inferiority Decision-Making under Observable Treatment Heterogeneity. Multivariate Behavioral Research, pp. 859-882. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11548885/pdf/
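Element (b) of the framework, mapping logistic coefficients onto the probability scale, can be sketched for a single binary response; the coefficients below are made up for illustration and the multivariate and decision-making machinery of the article is not reproduced.

```python
import math

def expit(z):
    """Inverse logit: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def success_probs(intercept, b_treat, b_x, x_values):
    """P(success | treatment) and P(success | control) for each value
    of a covariate x, computed from logistic-regression coefficients."""
    return [(expit(intercept + b_treat + b_x * x),
             expit(intercept + b_x * x)) for x in x_values]

# A constant treatment coefficient on the logit scale translates into
# treatment effects that vary across covariate values on the
# probability scale, which is one face of treatment heterogeneity.
pairs = success_probs(intercept=-1.0, b_treat=0.8, b_x=0.5, x_values=[0, 1, 2])
deltas = [p_t - p_c for p_t, p_c in pairs]
```

Reporting these probability-scale differences per covariate profile is what makes conditional effects interpretable to clinical audiences, compared to raw log-odds coefficients.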
Pub Date: 2024-05-01 | Epub Date: 2024-02-19 | DOI: 10.1080/00273171.2024.2310426
Youngjin Han, Yang Liu, Ji Seung Yang
Title: Information Matrix Test for Item Response Models Using Stochastic Approximation. Multivariate Behavioral Research, pp. 651-653.
Pub Date: 2024-05-01 | Epub Date: 2024-02-20 | DOI: 10.1080/00273171.2024.2307529
Xiao Liu
Propensity score (PS) analyses are increasingly popular in the behavioral sciences. Two issues often add complexity to PS analyses: missing data in observed covariates and clustered data structures. Previous research has examined methods for conducting PS analyses that address either issue alone. In practice, however, the two issues often co-occur, and the performance of PS analysis methods in the presence of both has not been evaluated. In this study, we consider PS weighting analysis when data are clustered and observed covariates have missing values. A simulation study evaluates the performance of different missing data handling methods (complete-case analysis, single-level imputation, or multilevel imputation) combined with different multilevel PS weighting methods (fixed- or random-effects PS models, inverse propensity weighting or clustered weighting, and weighted single-level or multilevel outcome models). The results suggest that bias in average treatment effect estimation can be reduced by better accounting for clustering in both the missing data handling stage (e.g., with multilevel imputation) and the PS analysis stage (e.g., with the fixed-effects PS model, clustered weighting, and a weighted multilevel outcome model). A real-data example is provided for illustration.
Title: Propensity Score Weighting with Missing Data on Covariates and Clustered Data Structure. Multivariate Behavioral Research, pp. 411-433.
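The weighting logic can be sketched with a saturated propensity model, one treatment rate per cluster-by-covariate cell, as a toy stand-in for the fitted fixed-effects PS models compared in the study; the data below are constructed, with a true treatment effect of 2 and a confounder that inflates the naive comparison.

```python
from collections import defaultdict

def ipw_ate(clusters, x, treat, y):
    """Hajek-style IPW estimate of the average treatment effect with a
    saturated (fixed-effects-by-cluster) propensity model: the PS is
    the observed treatment rate within each (cluster, covariate) cell."""
    cell_n = defaultdict(int)
    cell_t = defaultdict(int)
    for c, xi, t in zip(clusters, x, treat):
        cell_n[(c, xi)] += 1
        cell_t[(c, xi)] += t
    num1 = den1 = num0 = den0 = 0.0
    for c, xi, t, yi in zip(clusters, x, treat, y):
        ps = cell_t[(c, xi)] / cell_n[(c, xi)]
        if t == 1:
            w = 1.0 / ps
            num1 += w * yi
            den1 += w
        else:
            w = 1.0 / (1.0 - ps)
            num0 += w * yi
            den0 += w
    return num1 / den1 - num0 / den0

# Covariate x raises both treatment uptake and the outcome, and cluster
# B has a higher outcome baseline; the true treatment effect is 2.
clusters = ["A"] * 8 + ["B"] * 8
x = [0, 0, 0, 0, 1, 1, 1, 1] * 2
treat = [1, 0, 0, 0, 1, 1, 1, 0] * 2
y = [2 * t + 3 * xi + (1 if c == "B" else 0)
     for c, xi, t in zip(clusters, x, treat)]
ate = ipw_ate(clusters, x, treat, y)    # recovers 2.0 exactly here
naive = (sum(yi for t, yi in zip(treat, y) if t) / sum(treat)
         - sum(yi for t, yi in zip(treat, y) if not t)
         / (len(y) - sum(treat)))       # overstates the effect
```

In real analyses the PS would be estimated by a fitted model, and missing covariate values would first be handled by (multilevel) imputation, which is the combination the simulation study evaluates.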
Pub Date: 2024-05-01 | Epub Date: 2024-02-13 | DOI: 10.1080/00273171.2023.2288577
Marie Beisemann, Boris Forthmann, Philipp Doebler
In psychology and education, tests (e.g., reading tests) and self-reports (e.g., clinical questionnaires) generate counts, but the corresponding Item Response Theory (IRT) methods are underdeveloped compared to those for binary data. Recent advances include the Two-Parameter Conway-Maxwell-Poisson model (2PCMPM), which generalizes Rasch's Poisson Counts Model with item-specific difficulty, discrimination, and dispersion parameters. Explaining differences in model parameters informs item construction and selection but has received little attention. We introduce two 2PCMPM-based explanatory count IRT models: the Distributional Regression Test Model for item covariates and the Count Latent Regression Model for (categorical) person covariates. Estimation methods are provided, and satisfactory statistical properties are observed in simulations. Two examples illustrate how the models help understand tests and underlying constructs.
Title: Understanding Ability and Reliability Differences Measured with Count Items: The Distributional Regression Test Model and the Count Latent Regression Model. Multivariate Behavioral Research, pp. 502-522.
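The 2PCMPM generalizes Rasch's Poisson Counts Model (RPCM); the simpler RPCM admits a closed-form person-ability MLE, sketched here under the classical parameterization where the expected count on item j is ability times item easiness. The counts and easiness values are invented.

```python
import math

counts = [3, 5, 2]          # observed counts of one person on three items
easiness = [1.0, 2.0, 1.0]  # fixed (known) item easiness parameters

def rpcm_ability(counts, easiness):
    """Person-ability MLE in the RPCM, where the expected count on item
    j is lambda_j = theta * easiness_j; the Poisson likelihood is
    maximized at theta_hat = sum(counts) / sum(easiness)."""
    return sum(counts) / sum(easiness)

def rpcm_loglik(theta, counts, easiness):
    """Poisson log-likelihood of the counts given ability theta."""
    ll = 0.0
    for x, e in zip(counts, easiness):
        lam = theta * e
        ll += x * math.log(lam) - lam - math.lgamma(x + 1)
    return ll

theta_hat = rpcm_ability(counts, easiness)   # 10 / 4 = 2.5
```

The 2PCMPM replaces the Poisson by a Conway-Maxwell-Poisson kernel and adds discrimination and dispersion parameters, so no closed form of this kind remains; that is what motivates the dedicated estimation methods in the article.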
Pub Date: 2024-05-01 | Epub Date: 2024-02-14 | DOI: 10.1080/00273171.2023.2288575
Lydia Gabriela Speyer, Aja Louise Murray, Rogier Kievit
Recent technological advances have provided new opportunities for the collection of intensive longitudinal data. Using methods such as dynamic structural equation modeling, these data can provide new insights into moment-to-moment dynamics of psychological and behavioral processes. In intensive longitudinal data (t > 20), researchers often have theories that imply that factors that change from moment to moment within individuals act as moderators. For instance, a person's level of sleep deprivation may affect how much an external stressor affects mood. Here, we describe how researchers can implement, test, and interpret dynamically changing within-person moderation effects using two-level dynamic structural equation modeling as implemented in the structural equation modeling software Mplus. We illustrate the analysis of within-person moderation effects using an empirical example investigating whether changes in spending time online using social media affect the moment-to-moment effect of loneliness on depressive symptoms, and highlight avenues for future methodological development. We provide annotated Mplus code, enabling researchers to better isolate, estimate, and interpret the complexities of within-person interaction effects.
Title: Investigating Moderation Effects at the Within-Person Level Using Intensive Longitudinal Data: A Two-Level Dynamic Structural Equation Modelling Approach in Mplus. Multivariate Behavioral Research, pp. 620-637.
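A toy data-generating sketch of a within-person moderation effect of the kind analyzed above: the momentary effect of loneliness on depressive symptoms shifts with social media use. Variable names and coefficients are invented for illustration, and the two-level DSEM estimation itself (done in Mplus in the article) is not reproduced.

```python
import random

def slope(xs, ys):
    """OLS slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def gen_day(rng, media):
    """One simulated time point: the loneliness -> depression slope is
    0.2 + 0.5 * media, i.e., moderated by momentary media use."""
    lonely = rng.gauss(0, 1)
    dep = (0.2 + 0.5 * media) * lonely + rng.gauss(0, 0.3)
    return lonely, dep

rng = random.Random(3)
low = [gen_day(rng, media=0) for _ in range(300)]
high = [gen_day(rng, media=1) for _ in range(300)]
slope_low = slope([l for l, _ in low], [d for _, d in low])
slope_high = slope([l for l, _ in high], [d for _, d in high])
```

Splitting by the moderator only illustrates the mechanism; the DSEM approach instead models the moderated slope as a latent, person-specific quantity within one multilevel time-series model.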