{"title":"Evaluating statistical fit of confirmatory bifactor models: Updated recommendations and a review of current practice.","authors":"Sijia Li, Victoria Savalei","doi":"10.1037/met0000730","DOIUrl":null,"url":null,"abstract":"<p><p>Confirmatory bifactor models have become very popular in psychological applications, but they are increasingly criticized for statistical pitfalls such as tendency to overfit, tendency to produce anomalous results, instability of solutions, and underidentification problems. In part to combat this state of affairs, many different reliability and dimensionality measures have been proposed to help researchers evaluate the quality of the obtained bifactor solution. However, in empirical practice, the evaluation of bifactor models is largely based on structural equation model fit indices. Other critical indicators of solution quality, such as patterns of general and group factor loadings, whether all estimates are interpretable, and values of reliability coefficients, are often not taken into account. In addition, in the methodological literature, some confusion exists about the appropriate interpretation and application of some bifactor reliability coefficients. In this article, we accomplish several goals. First, we review reliability coefficients for bifactor models and their correct interpretations, and we provide expectations for their values. Second, to help steer researchers away from structural equation model fit indices and to improve current practice, we provide a checklist for evaluating the statistical fit of bifactor models. Third, we evaluate the state of current practice by examining 96 empirical articles employing confirmatory bifactor models across different areas of psychology. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6000,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/met0000730","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
引用次数: 0
Abstract
Confirmatory bifactor models have become very popular in psychological applications, but they are increasingly criticized for statistical pitfalls such as a tendency to overfit, a tendency to produce anomalous results, instability of solutions, and underidentification problems. In part to combat this state of affairs, many different reliability and dimensionality measures have been proposed to help researchers evaluate the quality of the obtained bifactor solution. However, in empirical practice, the evaluation of bifactor models is largely based on structural equation model fit indices. Other critical indicators of solution quality, such as patterns of general and group factor loadings, whether all estimates are interpretable, and values of reliability coefficients, are often not taken into account. In addition, in the methodological literature, some confusion exists about the appropriate interpretation and application of some bifactor reliability coefficients. In this article, we accomplish several goals. First, we review reliability coefficients for bifactor models and their correct interpretations, and we provide expectations for their values. Second, to help steer researchers away from structural equation model fit indices and to improve current practice, we provide a checklist for evaluating the statistical fit of bifactor models. Third, we evaluate the state of current practice by examining 96 empirical articles employing confirmatory bifactor models across different areas of psychology. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
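As a rough illustration of the kinds of reliability and dimensionality coefficients the abstract refers to, the following is a minimal Python sketch computing omega total, omega hierarchical, and explained common variance (ECV) from a standardized bifactor loading pattern. The loading values are invented for illustration, and the formulas follow the definitions commonly given in the bifactor literature; they are not taken from this article and may differ from the specific coefficients it reviews.

```python
import numpy as np

# Hypothetical standardized loadings for 9 items: one general factor
# and three group factors with three items each (values are illustrative only).
general = np.array([.60, .55, .50, .65, .60, .55, .50, .45, .55])
group = np.array([
    [.40, .35, .30, .00, .00, .00, .00, .00, .00],   # group factor 1
    [.00, .00, .00, .45, .40, .35, .00, .00, .00],   # group factor 2
    [.00, .00, .00, .00, .00, .00, .50, .45, .40],   # group factor 3
])

# Residual (unique) variances implied by the standardized solution
theta = 1 - general**2 - (group**2).sum(axis=0)

# Total variance of the unit-weighted sum score implied by the model
total_var = general.sum()**2 + (group.sum(axis=1)**2).sum() + theta.sum()

# Omega total: proportion of total score variance due to all common factors
omega_total = (general.sum()**2 + (group.sum(axis=1)**2).sum()) / total_var

# Omega hierarchical: proportion of total score variance due to the general factor alone
omega_h = general.sum()**2 / total_var

# ECV: share of the common variance attributable to the general factor
ecv = (general**2).sum() / ((general**2).sum() + (group**2).sum())

print(f"omega total        = {omega_total:.3f}")
print(f"omega hierarchical = {omega_h:.3f}")
print(f"ECV                = {ecv:.3f}")
```

In practice such coefficients are usually obtained from a fitted confirmatory bifactor solution (e.g., via an SEM package) rather than from hand-entered loadings, but the arithmetic above shows what the coefficients summarize: how much of the reliable score variance is carried by the general factor versus the group factors.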
Journal description:
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse and to include those who develop new procedures, those who are responsible for undergraduate and graduate training in design, measurement, and statistics, as well as those who employ these procedures in research.