Christoph Kiefer, Marcella L Woud, Simon E Blackwell, Axel Mayer
When evaluating the effect of psychological treatments on a dichotomous outcome variable in a randomized controlled trial (RCT), covariate adjustment using logistic regression models is often applied. In the presence of covariates, average marginal effects (AMEs) are often preferred over odds ratios, as AMEs yield a clearer substantive and causal interpretation. However, standard error computation for AMEs neglects sampling-based uncertainty (i.e., covariate values are assumed to be fixed over repeated sampling), which is known to lead to underestimated AME standard errors in other generalized linear models (e.g., Poisson regression). In this paper, we present and compare approaches allowing for stochastic (i.e., randomly sampled) covariates in models for binary outcomes. In a simulation study, we investigated the quality of the AME and stochastic-covariate approaches, focusing on statistical inference in finite samples. Our results indicate that the fixed-covariate approach provides reliable results only if there is no heterogeneity in interindividual treatment effects (i.e., no treatment-covariate interactions), while the stochastic-covariate approaches are preferable in all other simulated conditions. We provide an illustrative example from clinical psychology, using an RCT that investigated the effect of a cognitive bias modification training on post-traumatic stress disorder while accounting for patients' anxiety.
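As a hedged illustration of the quantity at stake: the AME of a binary treatment in a logistic model is the treatment-control difference in predicted success probabilities, averaged over the covariate distribution. All coefficients below are hypothetical, and the covariate bootstrap merely mimics the "stochastic covariates" idea (coefficients held fixed); it is not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)  # a covariate (e.g., baseline anxiety); values simulated
b0, bt, bx, btx = -0.5, 0.8, 0.4, 0.3  # hypothetical logistic coefficients

def prob(t, x):
    """P(Y = 1 | T = t, X = x) under the assumed logistic model."""
    eta = b0 + bt * t + bx * x + btx * t * x
    return 1.0 / (1.0 + np.exp(-eta))

# AME: average the individual treatment-control probability differences
# over the observed covariate values.
ame = float(np.mean(prob(1, x) - prob(0, x)))

# Treating covariates as stochastic: resample x to propagate covariate
# sampling variability into the AME (illustration only).
boot = []
for _ in range(200):
    xs = rng.choice(x, size=n)
    boot.append(float(np.mean(prob(1, xs) - prob(0, xs))))
se_stochastic = float(np.std(boot))
```

Note that the interaction coefficient `btx` makes individual treatment effects heterogeneous, which is exactly the condition under which the paper finds the fixed-covariate standard errors unreliable.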
Average treatment effects on binary outcomes with stochastic covariates. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12355, published 2024-07-24.
The analysis of multiple bivariate correlations is often carried out by conducting simple tests to check whether each of them is significantly different from zero. In addition, pairwise differences are often judged by eye or by comparing the p-values of the individual tests of significance despite the existence of statistical tests for differences between correlations. This paper uses simulation methods to assess the accuracy (empirical Type I error rate), power, and robustness of 10 tests designed to check the significance of the difference between two dependent correlations with overlapping variables (i.e., the correlation between X1 and Y and the correlation between X2 and Y). Five of the tests turned out to be inadvisable because their empirical Type I error rates under normality differ greatly from the nominal alpha level of .05 either across the board or within certain sub-ranges of the parameter space. The remaining five tests were acceptable and their merits were similar in terms of all comparison criteria, although none of them was robust across all forms of non-normality explored in the study. Practical recommendations are given for the choice of a statistical test to compare dependent correlations with overlapping variables.
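One classical candidate in this family is Williams' t test for two dependent correlations sharing a variable, in the form given by Steiger (1980). Whether it is among the five acceptable tests is for the paper to say; treat this sketch as illustrative of the test type only.

```python
import math

def williams_t(r1y, r2y, r12, n):
    """Williams' t (Steiger's 1980 presentation) for H0: rho(X1,Y) = rho(X2,Y),
    two dependent correlations with the overlapping variable Y.
    r1y = corr(X1, Y), r2y = corr(X2, Y), r12 = corr(X1, X2), n = sample size.
    Returns (t, df); compare t to a t distribution with n - 3 df."""
    det_r = 1 - r12**2 - r1y**2 - r2y**2 + 2 * r12 * r1y * r2y  # |R|
    r_bar = (r1y + r2y) / 2
    num = (r1y - r2y) * math.sqrt((n - 1) * (1 + r12))
    den = math.sqrt(2 * det_r * (n - 1) / (n - 3) + r_bar**2 * (1 - r12) ** 3)
    return num / den, n - 3
```

By construction the statistic is zero when the two sample correlations coincide, and positive when the first correlation exceeds the second.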
Are alternative variables in a set differently associated with a target variable? Statistical tests and practical advice for dealing with dependent correlations. Miguel A García-Pérez. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12354, published 2024-06-24.
Exploratory cognitive diagnosis models have been widely used in psychology, education and other fields. This paper focuses on determining the number of attributes in a widely used cognitive diagnosis model, the GDINA model. Under some conditions on cognitive diagnosis models, we prove that the covariance matrix of the observed data has a special structure. Exploiting this special structure, we propose an estimator of the number of attributes in the GDINA model based on eigen-decomposition. The performance of the proposed estimator is verified in simulation studies. Finally, the estimator is applied to two real data sets: the Examination for the Certificate of Proficiency in English (ECPE) and the Big Five Personality (BFP) data.
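The general flavour of such an eigen-decomposition estimator can be sketched with a simple eigenvalue-count heuristic on simulated linear-factor data. This is not the authors' estimator for the GDINA model: the loading matrix, noise level, and the cut-off of 1.0 are all ad hoc choices for this toy setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_attributes, n_items, n_persons = 3, 12, 2000

# Toy loading matrix: each attribute drives four distinct items.
A = np.zeros((n_items, n_attributes))
for k in range(n_attributes):
    A[4 * k:4 * (k + 1), k] = 1.0

z = rng.normal(size=(n_persons, n_attributes))             # latent attributes
X = z @ A.T + 0.5 * rng.normal(size=(n_persons, n_items))  # observed data

# The attribute structure shows up as a few dominant eigenvalues of the
# observed covariance matrix; count those clearly above the noise floor.
eigvals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
k_hat = int(np.sum(eigvals > 1.0))
```

Here the three signal eigenvalues sit near 4.25 and the remaining nine near the noise variance of 0.25, so the count recovers the true number of attributes.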
Determining the number of attributes in the GDINA model. Juntao Wang, Jiangtao Duan. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12349, published 2024-06-18.
Computerized adaptive testing for cognitive diagnosis (CD-CAT) achieves remarkable estimation efficiency and accuracy by adaptively selecting and then administering items tailored to each examinee. The process of item selection stands as a pivotal component of a CD-CAT algorithm, with various methods having been developed for binary responses. However, multiple-choice (MC) items, an important item type that allows for the extraction of richer diagnostic information from incorrect answers, have been underemphasized. Currently, the Jensen-Shannon divergence (JSD) index introduced by Yigit et al. (Applied Psychological Measurement, 2019, 43, 388) is the only item selection method exclusively designed for MC items. However, the JSD index requires a large sample to calibrate item parameters, which may be infeasible when there is only a small or no calibration sample. To bridge this gap, the study first proposes a nonparametric item selection method for MC items (MC-NPS) by implementing novel discrimination power that measures an item's ability to effectively distinguish among different attribute profiles. A Q-optimal procedure for MC items is also developed to improve the classification during the initial phase of a CD-CAT algorithm. The effectiveness and efficiency of the two proposed algorithms were confirmed by simulation studies.
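The mixture form of the Jensen-Shannon divergence behind such an index can be sketched as follows. The option-probability rows and posterior weights below are invented for illustration, and the sketch is the generic JSD computation, not Yigit et al.'s full selection procedure.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero cells."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def jsd_index(option_probs, posterior):
    """JSD of an item's option-response distributions: rows index attribute
    profiles, columns index the MC options, and the weights are the current
    posterior over profiles. Larger values flag more diagnostic items."""
    option_probs = np.asarray(option_probs, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    mixture = posterior @ option_probs  # posterior-weighted mixture distribution
    return entropy(mixture) - sum(
        w * entropy(row) for w, row in zip(posterior, option_probs)
    )

# An item that separates two equally likely profiles perfectly vs. one
# whose options are uninformative about the profile:
perfect = jsd_index([[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])  # 1 bit
useless = jsd_index([[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5])  # 0 bits
```

In an adaptive test one would compute this index for every remaining candidate item under the examinee's current posterior and administer the item with the largest value.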
Nonparametric CD-CAT for multiple-choice items: Item selection method and Q-optimality. Yu Wang, Chia-Yi Chiu, Hans Friedrich Köhn. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12350, published 2024-05-25.
Oral reading fluency (ORF) assessments are commonly used to screen at-risk readers and evaluate interventions' effectiveness as curriculum-based measurements. Similar to the standard practice in item response theory (IRT), calibrated passage parameter estimates are currently used as if they were population values in model-based ORF scoring. However, calibration errors that are unaccounted for may bias ORF score estimates and, in particular, lead to underestimated standard errors (SEs) of ORF scores. Therefore, we consider an approach that incorporates the calibration errors in latent variable scores. We further derive the SEs of ORF scores based on the delta method to incorporate the calibration uncertainty. We conduct a simulation study to evaluate the recovery of point estimates and SEs of latent variable scores and ORF scores in various simulated conditions. Results suggest that ignoring calibration errors leads to underestimated latent variable score SEs and ORF score SEs, especially when the calibration sample is small.
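The delta method step can be sketched generically: propagate the covariance of parameter estimates through a (numerical) gradient of the scoring function. The function `g`, the estimate `theta`, and its covariance `cov` below are placeholders, not the paper's ORF scoring model.

```python
import numpy as np

def delta_method_se(g, theta, cov, eps=1e-6):
    """Approximate SE of g(theta_hat) via the delta method:
    sqrt(grad(g)' * Cov(theta_hat) * grad(g)), using a central-difference
    numerical gradient of g at theta."""
    theta = np.asarray(theta, dtype=float)
    cov = np.asarray(cov, dtype=float)
    grad = np.array([
        (g(theta + eps * e) - g(theta - eps * e)) / (2 * eps)
        for e in np.eye(len(theta))
    ])
    return float(np.sqrt(grad @ cov @ grad))
```

In the paper's setting, `cov` would additionally include the passage-calibration uncertainty that the standard fixed-parameter practice ignores; enlarging `cov` that way is what prevents the underestimated SEs.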
Incorporating calibration errors in oral reading fluency scoring. Xin Qiao, Akihito Kamata, Cornelis Potgieter. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12348, published 2024-05-10.
Shi-Fang Qiu, Jie Lei, Wai-Yin Poon, Man-Lai Tang, Ricky S Wong, Ji-Ran Tao
Surveys with sensitive questions must include a sufficient number of participants to adequately address the research question. In this paper, sample size formulas and iterative algorithms are developed from the perspective of controlling the confidence interval width of the prevalence of a sensitive attribute under four non-randomized response models: the crosswise model, parallel model, Poisson item count technique model and negative binomial item count technique model. In contrast to the conventional approach to sample size determination, our sample size formulas and algorithms explicitly incorporate an assurance probability of controlling the width of a confidence interval within the pre-specified range. The performance of the proposed methods is evaluated with respect to the empirical coverage probability, empirical assurance probability and confidence width. Simulation results show that all formulas and algorithms are effective and hence are recommended for practical applications. A real example is used to illustrate the proposed methods.
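The simplest member of this family can be sketched for the crosswise model, where the observed "yes" probability is a known mixture of the sensitive prevalence pi and a nonsensitive probability p. The sketch below controls only the expected Wald interval width; the assurance-probability refinement is the paper's contribution and is not reproduced here, and the inputs are illustrative.

```python
import math

def crosswise_n(pi, p, width, alpha=0.05):
    """Smallest n for which the Wald CI for the prevalence pi under the
    crosswise model has expected width <= width. Under the model the
    observed 'yes' probability is lam = pi*p + (1-pi)*(1-p), and
    pi_hat = (lam_hat + p - 1) / (2p - 1), so
    Var(pi_hat) = lam*(1-lam) / (n * (2p-1)^2)."""
    z = 1.959963984540054  # z_{0.975}, hard-coded to stay stdlib-only
    lam = pi * p + (1 - pi) * (1 - p)
    n_var = lam * (1 - lam) / (2 * p - 1) ** 2  # n * Var(pi_hat)
    return math.ceil(4 * z**2 * n_var / width**2)
```

As expected, halving the target width roughly quadruples the required sample size, and the factor (2p - 1)^(-2) shows the privacy-efficiency trade-off: the closer p is to 1/2 (more privacy protection), the larger the survey must be.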
Sample size determination for interval estimation of the prevalence of a sensitive attribute under non-randomized response models. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12338, published 2024-02-26.
Sebastian Castro-Alvarez, Sandip Sinharay, Laura F Bringmann, Rob R Meijer, Jorge N Tendeiro
Several new models based on item response theory have recently been suggested to analyse intensive longitudinal data. One of these new models is the time-varying dynamic partial credit model (TV-DPCM; Castro-Alvarez et al., Multivariate Behavioral Research, 2023, 1), which is a combination of the partial credit model and the time-varying autoregressive model. The model allows the study of the psychometric properties of the items and the modelling of nonlinear trends at the latent state level. However, there is a severe lack of tools to assess the fit of the TV-DPCM. In this paper, we propose and develop several test statistics and discrepancy measures based on the posterior predictive model checking (PPMC) method (PPMC; Rubin, The Annals of Statistics, 1984, 12, 1151) to assess the fit of the TV-DPCM. Simulated and empirical data are used to study the performance of and illustrate the effectiveness of the PPMC method.
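The PPMC logic itself is generic and can be sketched in a toy normal-mean model (not the TV-DPCM): draw parameters from the posterior, simulate replicated data, and compare a discrepancy measure on replicated versus observed data via a posterior predictive p-value. Everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.2, 1.0, size=50)  # "observed" data (simulated stand-in)
n = len(y)

# Toy conjugate posterior for the mean (flat prior, known sd = 1).
posterior_draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=1000)

def discrepancy(data):
    return float(np.mean(data))  # a deliberately simple measure

# Posterior predictive p-value: how often replicated data are at least
# as extreme as the observed data under the fitted model.
ppp = float(np.mean([
    discrepancy(rng.normal(mu, 1.0, size=n)) >= discrepancy(y)
    for mu in posterior_draws
]))
```

Values of `ppp` near 0 or 1 flag misfit, while values near .5 indicate adequate fit on that measure; a sufficient-statistic discrepancy like the mean is conservative by construction, which is why applied PPMC work designs sharper, model-targeted discrepancy measures.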
Assessment of fit of the time-varying dynamic partial credit model using the posterior predictive model checking method. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12339, published 2024-02-21.
Exploratory structural equation modelling (ESEM) is an alternative to the well-known method of confirmatory factor analysis (CFA). ESEM is mainly used to assess the quality of measurement models of common factors but can be efficiently extended to test structural models. However, ESEM may not be the best option in some model specifications, especially when structural models are involved, because the full flexibility of ESEM can create technical difficulties in model estimation. Thus, set-ESEM was developed to strike a balance between full-ESEM and CFA. In the present paper, we show examples where set-ESEM should be used rather than full-ESEM. Rather than relying on a simulation study, we provide two applied examples using real data that are included in the OSF repository. Additionally, we provide the code needed to run set-ESEM in the free R package lavaan to make the paper practical. Set-ESEM structural models outperform their CFA-based counterparts in terms of goodness of fit and yield more realistic factor correlations and, hence, path coefficients in the two empirical examples. In several instances, effects that were non-significant (i.e., attenuated) in the CFA-based structural model become larger and significant in the set-ESEM structural model, suggesting that set-ESEM models may generate more accurate model parameters and, hence, lower Type II error rates.
When and how to use set-exploratory structural equation modelling to test structural models: A tutorial using the R package lavaan. Herb Marsh, Abdullah Alamer. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12336, published 2024-02-15.
Different data types often occur in psychological and educational measurement, for example in computer-based assessments that record performance and process data (e.g., response times and the number of actions). Modelling such data requires specific models for each data type and accommodating complex dependencies between multiple variables. Generalized linear latent variable models are suitable for modelling mixed data simultaneously, but estimation can be computationally demanding. A fast solution is to use Laplace approximations, but existing implementations of joint modelling of mixed data types are limited to ordinal and continuous data. To address this limitation, we derive an efficient estimation method that uses first- or second-order Laplace approximations to simultaneously model ordinal data, continuous data, and count data. We illustrate the approach with an example and conduct simulations to evaluate the performance of the method in terms of estimation efficiency, convergence, and parameter recovery. The results suggest that the second-order Laplace approximation achieves a higher convergence rate and produces accurate yet fast parameter estimates compared to the first-order Laplace approximation, while the time cost increases with higher model complexity. Additionally, models that consider the dependence of variables from the same stimulus fit the empirical data substantially better than models that disregard the dependence.
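The first-order Laplace step can be illustrated on a one-dimensional integral: approximate log ∫ exp(h(z)) dz by matching a Gaussian at the mode of h. The Newton solver and the test function below are illustrative, not the authors' GLLVM implementation.

```python
import math

def laplace_log_marginal(h, dh, d2h, z0=0.0, iters=50):
    """First-order Laplace approximation to log ∫ exp(h(z)) dz for a
    unimodal integrand: locate the mode of h by Newton's method using its
    first and second derivatives, then apply
    h(z_hat) + 0.5 * log(2*pi / -h''(z_hat))."""
    z = z0
    for _ in range(iters):
        z -= dh(z) / d2h(z)
    return h(z) + 0.5 * math.log(2.0 * math.pi / -d2h(z))

# Sanity check on a case with a closed form: exp(h) is an unnormalized
# N(1, 4) density, so the exact log-integral is 0.5 * log(2*pi * 4).
approx = laplace_log_marginal(
    h=lambda z: -0.5 * (z - 1.0) ** 2 / 4.0,
    dh=lambda z: -(z - 1.0) / 4.0,
    d2h=lambda z: -0.25,
)
```

For a Gaussian log-integrand the approximation is exact; for GLLVMs with count or ordinal responses it is only approximate, which is where the second-order correction studied in the paper becomes relevant.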
Fast estimation of generalized linear latent variable models for performance and process data with ordinal, continuous, and count observed variables. Maoxin Zhang, Björn Andersson, Shaobo Jin. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12337, published 2024-02-12.
Crossed random effects models (CREMs) are particularly useful in longitudinal data applications because they allow researchers to account for the impact of dynamic group membership on individual outcomes. However, no research has determined what data conditions need to be met to sufficiently identify these models, especially the group effects, in a longitudinal context. This is a significant gap in the current literature, as future applications to real data may need to consider these conditions to yield accurate and precise model parameter estimates, specifically for the group effects on individual outcomes. Furthermore, there are no existing CREMs that can model intrinsically nonlinear growth. The goals of this study are to develop a Bayesian piecewise CREM to model intrinsically nonlinear growth and to evaluate what data conditions are necessary to empirically identify both intrinsically linear and nonlinear longitudinal CREMs. This study includes an applied example that utilizes the piecewise CREM with real data, as well as three simulation studies assessing the data conditions necessary to estimate linear, quadratic, and piecewise CREMs. Results show that the number of repeated measurements collected on groups impacts the ability to recover the group effects. Additionally, functional form complexity impacts data collection requirements for estimating longitudinal CREMs.
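The "piecewise" part of the mean structure can be sketched as a linear spline with one knot (changepoint). The parameter values below are invented, and the full model additionally places crossed random effects and Bayesian priors on these quantities.

```python
def piecewise_growth(t, b0, b1, b2, knot):
    """Piecewise linear growth curve: intercept b0, slope b1 up to the
    knot (changepoint), slope b2 afterwards; continuous at the knot.
    Estimating the knot itself makes the model intrinsically nonlinear."""
    return b0 + b1 * min(t, knot) + b2 * max(t - knot, 0.0)

# Illustrative trajectory: fast growth until time 4, slower afterwards.
before = piecewise_growth(2.0, b0=1.0, b1=2.0, b2=0.5, knot=4.0)   # 5.0
at_knot = piecewise_growth(4.0, b0=1.0, b1=2.0, b2=0.5, knot=4.0)  # 9.0
after = piecewise_growth(6.0, b0=1.0, b1=2.0, b2=0.5, knot=4.0)    # 10.0
```

In the paper's longitudinal setting, the coefficients (and potentially the knot) would vary by person and by the dynamically changing group, which is what the crossed random effects capture.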
Identifiability and estimability of Bayesian linear and nonlinear crossed random effects models. Corissa T. Rohloff, Nidhi Kohli, Eric F. Lock. British Journal of Mathematical & Statistical Psychology, DOI 10.1111/bmsp.12334, published 2024-01-24 (open access).