We need to change how we compute RMSEA for nested model comparisons in structural equation modeling
Victoria Savalei, Jordan C. Brace, Rachel T. Fouladi
Psychological Methods, pp. 480-493. DOI: 10.1037/met0000537 (Epub 2023-01-09; published 2024-06-01)
Citations: 0
Abstract
Comparison of nested models is common in applications of structural equation modeling (SEM). When two models are nested, model comparison can be done via a chi-square difference test or by comparing indices of approximate fit. The advantage of fit indices is that they permit some amount of misspecification in the additional constraints imposed on the model, which is a more realistic scenario. The most popular index of approximate fit is the root mean square error of approximation (RMSEA). In this article, we argue that the dominant way of comparing RMSEA values for two nested models, which is simply taking their difference, is problematic and will often mask misfit, particularly in model comparisons with large initial degrees of freedom. We instead advocate computing the RMSEA associated with the chi-square difference test, which we call RMSEA_D. We are not the first to propose this index, and we review numerous methodological articles that have suggested it. Nonetheless, these articles appear to have had little impact on actual practice. The modification of current practice that we call for may be particularly needed in the context of measurement invariance assessment. We illustrate the difference between the current approach and our advocated approach with three examples: two involve multiple-group and longitudinal measurement invariance assessment, and the third involves comparisons of models with different numbers of factors. We conclude with a discussion of recommendations and future research directions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
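For readers who want a concrete sense of the index, below is a minimal sketch in Python. It assumes the standard single-group RMSEA point estimate, RMSEA = sqrt(max(χ² − df, 0) / (df(N − 1))), applied to the difference-test statistic; the article's exact estimator (e.g., multiple-group adjustments and confidence intervals) may differ, and the function names and all numeric inputs here are hypothetical, not taken from the article.

```python
import math

def rmsea_from_chisq(chisq: float, df: int, n: int) -> float:
    """Standard single-group RMSEA point estimate:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Note: some software divides by n rather than n - 1."""
    if df <= 0:
        raise ValueError("degrees of freedom must be positive")
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def rmsea_d(chisq_restricted: float, df_restricted: int,
            chisq_full: float, df_full: int, n: int) -> float:
    """RMSEA associated with the chi-square difference test for two
    nested models: apply the RMSEA formula to (chi2_D, df_D), the
    differences between the restricted (more constrained) model and
    the full (less constrained) model."""
    chisq_diff = chisq_restricted - chisq_full
    df_diff = df_restricted - df_full
    return rmsea_from_chisq(chisq_diff, df_diff, n)

# Hypothetical values for illustration only:
# N = 500; full model: chi2 = 210 on 120 df;
# restricted (e.g., invariance-constrained) model: chi2 = 250 on 132 df.
n = 500
r_full = rmsea_from_chisq(210.0, 120, n)        # ~0.039
r_restricted = rmsea_from_chisq(250.0, 132, n)  # ~0.042
r_d = rmsea_d(250.0, 132, 210.0, 120, n)        # ~0.068

print(f"RMSEA (full):       {r_full:.3f}")
print(f"RMSEA (restricted): {r_restricted:.3f}")
print(f"Naive difference:   {r_restricted - r_full:.3f}")
print(f"RMSEA_D:            {r_d:.3f}")
```

In this made-up example, the naive difference between the two RMSEA values is about .004, while RMSEA_D is about .068: the misfit introduced by the 12 constrained degrees of freedom is diluted when spread over the restricted model's 132 degrees of freedom, which is exactly the masking effect the abstract describes.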
Journal Overview
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; a further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse, including those who develop new procedures, those responsible for undergraduate and graduate training in design, measurement, and statistics, and those who apply these procedures in research.