Pub Date: 2023-06-01 | Epub Date: 2023-03-19 | DOI: 10.1177/01466216231165314
Chia-Wen Chen, Chen-Wei Liu
Student evaluation of teaching (SET) assesses students' experiences in a class to evaluate the teacher's performance. SET essentially comprises three facets: teaching proficiency, student rating harshness, and item properties. The computerized adaptive testing form of SET with an established item pool has been used in educational settings. However, conventional scoring methods ignore the harshness of students toward teachers and therefore cannot provide a valid assessment. In addition, simultaneously estimating teachers' teaching proficiency and students' harshness remains an unaddressed issue in the context of online SET. In the current study, we develop and compare three novel methods (marginal, iterative once, and hybrid approaches) to improve the precision of parameter estimation. A simulation study demonstrates that the hybrid method is a promising technique that can substantially outperform traditional methods.
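The abstract does not spell out the response model, but the three facets lend themselves to a Rasch-style facets formulation in which a positive rating becomes more likely with higher teaching proficiency and less likely with higher student harshness or item difficulty. The sketch below simulates dichotomous SET responses under that assumed formulation; it illustrates the facet structure only, not the authors' marginal, iterative once, or hybrid estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

n_teachers, n_students, n_items = 20, 200, 10
proficiency = rng.normal(0, 1, n_teachers)   # teacher teaching proficiency
harshness = rng.normal(0, 1, n_students)     # student rating harshness
difficulty = rng.normal(0, 1, n_items)       # item property (difficulty of endorsement)

# Each student rates one randomly assigned teacher on every item.
teacher_of = rng.integers(0, n_teachers, n_students)

# Assumed facets model: logit P(positive rating) = proficiency - harshness - difficulty.
logits = (proficiency[teacher_of][:, None]
          - harshness[:, None]
          - difficulty[None, :])
responses = rng.binomial(1, 1 / (1 + np.exp(-logits)))  # students x items 0/1 ratings

print(responses.shape, responses.mean().round(3))
```

Conventional scoring that ignores harshness amounts to dropping the harshness term when estimating proficiency from data of this kind.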
{"title":"Online Parameter Estimation for Student Evaluation of Teaching.","authors":"Chia-Wen Chen, Chen-Wei Liu","doi":"10.1177/01466216231165314","DOIUrl":"10.1177/01466216231165314","url":null,"abstract":"<p><p>Student evaluation of teaching (SET) assesses students' experiences in a class to evaluate teachers' performance in class. SET essentially comprises three facets: teaching proficiency, student rating harshness, and item properties. The computerized adaptive testing form of SET with an established item pool has been used in educational environments. However, conventional scoring methods ignore the harshness of students toward teachers and, therefore, are unable to provide a valid assessment. In addition, simultaneously estimating teachers' teaching proficiency and students' harshness remains an unaddressed issue in the context of online SET. In the current study, we develop and compare three novel methods-marginal, iterative once, and hybrid approaches-to improve the precision of parameter estimations. A simulation study is conducted to demonstrate that the hybrid method is a promising technique that can substantially outperform traditional methods.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 4","pages":"291-311"},"PeriodicalIF":1.2,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240567/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10300642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | Epub Date: 2023-05-13 | DOI: 10.1177/01466216231174559
Xiaojian Sun, Shimeng Wang, Lei Guo, Tao Xin, Naiqing Song
Items that exhibit differential item functioning (DIF) compromise the validity and fairness of a test. Studies have investigated the DIF effect in the context of cognitive diagnostic assessment (CDA), and several DIF detection methods have been proposed. Most of these methods are designed to detect DIF between two groups; however, empirical situations may involve more than two groups. To date, only a handful of studies have examined DIF detection with multiple groups in the CDA context. This study uses the generalized logistic regression (GLR) method to detect DIF items, with the estimated attribute profile serving as the matching criterion. A simulation study is conducted to examine the performance of two GLR methods, the GLR-based Wald test (GLR-Wald) and the GLR-based likelihood ratio test (GLR-LRT), in detecting DIF items; results based on the ordinary Wald test are also reported. Results show that (1) both GLR-Wald and GLR-LRT control Type I error rates better than the ordinary Wald test in most conditions; (2) the GLR methods also produce higher empirical rejection rates than the ordinary Wald test in most conditions; and (3) using the estimated attribute profile as the matching criterion produces similar Type I error rates and empirical rejection rates for GLR-Wald and GLR-LRT. A real data example is also analyzed to illustrate the application of these DIF detection methods with multiple groups.
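As a concrete illustration of logistic-regression-based DIF detection with more than two groups, the sketch below regresses a studied item on a matching score (here, simply the sum of an already estimated attribute profile) plus G-1 group dummies, and compares the augmented and compact models with a likelihood ratio test. The variable names and the simulated DIF effect are assumptions for illustration; this mirrors the flavor of GLR-LRT but is not the authors' exact formulation.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)

n, n_groups = 900, 3
group = rng.integers(0, n_groups, n)                 # group membership (0 = reference)
# Suppose attribute profiles were already estimated by a CDM; use their sum as the matching score.
attr_profile = rng.binomial(1, 0.6, size=(n, 4))
match = attr_profile.sum(axis=1)

# Simulate one studied item with uniform DIF against group 2.
logit = -1.0 + 0.8 * match - 0.7 * (group == 2)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Compact (no-DIF) model: response ~ matching score only.
X0 = sm.add_constant(match.astype(float))
m0 = sm.Logit(y, X0).fit(disp=0)

# Augmented model: add G-1 group dummies (uniform DIF terms).
dummies = np.column_stack([(group == g).astype(float) for g in range(1, n_groups)])
X1 = np.column_stack([X0, dummies])
m1 = sm.Logit(y, X1).fit(disp=0)

lrt = 2 * (m1.llf - m0.llf)                          # likelihood-ratio statistic
df = n_groups - 1
print("LRT =", round(lrt, 2), "p =", round(1 - stats.chi2.cdf(lrt, df), 4))
```

A joint Wald test on the group coefficients of the augmented model would play the role of the Wald-based variant in this sketch.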
{"title":"Using a Generalized Logistic Regression Method to Detect Differential Item Functioning With Multiple Groups in Cognitive Diagnostic Tests.","authors":"Xiaojian Sun, Shimeng Wang, Lei Guo, Tao Xin, Naiqing Song","doi":"10.1177/01466216231174559","DOIUrl":"10.1177/01466216231174559","url":null,"abstract":"<p><p>Items with the presence of differential item functioning (DIF) will compromise the validity and fairness of a test. Studies have investigated the DIF effect in the context of cognitive diagnostic assessment (CDA), and some DIF detection methods have been proposed. Most of these methods are mainly designed to perform the presence of DIF between two groups; however, empirical situations may contain more than two groups. To date, only a handful of studies have detected the DIF effect with multiple groups in the CDA context. This study uses the generalized logistic regression (GLR) method to detect DIF items by using the estimated attribute profile as matching criteria. A simulation study is conducted to examine the performance of the two GLR methods, GLR-based Wald test (GLR-Wald) and GLR-based likelihood ratio test (GLR-LRT), in detecting the DIF items, the results based on the ordinary Wald test are also reported. Results show that (1) both GLR-Wald and GLR-LRT have more reasonable performance in controlling Type I error rates than the ordinary Wald test in most conditions; (2) the GLR method also produces higher empirical rejection rates than the ordinary Wald test in most conditions; and (3) using the estimated attribute profile as the matching criteria can produce similar Type I error rates and empirical rejection rates for GLR-Wald and GLR-LRT. A real data example is also analyzed to illustrate the application of these DIF detection methods in multiple groups.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 4","pages":"328-346"},"PeriodicalIF":1.2,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240570/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10300639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | Epub Date: 2023-03-17 | DOI: 10.1177/01466216231165313
Chen Tian, Jaehwa Choi
Sibling items developed through automatic item generation share similar but not identical psychometric properties. However, modeling sibling item variation can impose a heavy computational burden with little improvement in scoring. Assuming identical characteristics among siblings, this study explores the impact of item model parameter variations (i.e., within-family variation between siblings) on person parameter estimation in linear tests and computerized adaptive testing (CAT). Specifically, we explore (1) what happens if small, medium, or large within-family variance is ignored; (2) whether the effect of larger within-model variance can be compensated by greater test length; (3) whether item model pool properties affect the impact of within-family variance on scoring; and (4) whether the issues in (1) and (2) differ between linear and adaptive testing. The related-siblings model is used for data generation, and the identical-siblings model is assumed for scoring. Manipulated factors include test length, the size of within-model variation, and item model pool characteristics. Results show that as within-family variance increases, the standard error of scores remains at a similar level. For the correlation between true and estimated scores and for RMSE, the effect of larger within-model variance was compensated by test length. For bias, scores were biased toward the center, and this bias was not compensated by test length. Although the within-family variation is random in the current simulations, to yield less biased ability estimates the item model pool should provide balanced opportunities such that "fake-easy" and "fake-difficult" item instances cancel their effects. The results for CAT are similar to those for linear tests, except for higher efficiency.
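A minimal sketch of the design described above, under assumed 2PL item models: sibling difficulties are drawn around their item-model (parent) values with a chosen within-family SD, responses are generated from the sibling parameters, but scoring (EAP here) assumes the identical-siblings model, i.e., the parent parameters. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items = 1000, 30
within_sd = 0.3                                  # size of within-family (sibling) variation

theta = rng.normal(0, 1, n_persons)
a_parent = rng.lognormal(0, 0.3, n_items)        # item-model (parent) discriminations
b_parent = rng.normal(0, 1, n_items)             # item-model (parent) difficulties

# Each administered instance is a sibling: parent difficulty plus a random perturbation.
b_sibling = b_parent + rng.normal(0, within_sd, n_items)

# Responses are generated from the sibling (true) parameters ...
p_true = 1 / (1 + np.exp(-a_parent * (theta[:, None] - b_sibling)))
resp = rng.binomial(1, p_true)

# ... but scored assuming the identical-siblings model (parent parameters), via EAP.
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)
p_grid = 1 / (1 + np.exp(-a_parent * (grid[:, None] - b_parent)))   # grid x items
like = np.prod(np.where(resp[:, None, :] == 1, p_grid, 1 - p_grid), axis=2)
post = like * prior
eap = (post * grid).sum(axis=1) / post.sum(axis=1)

print("bias:", round(np.mean(eap - theta), 3),
      "RMSE:", round(np.sqrt(np.mean((eap - theta) ** 2)), 3))
```

Varying `within_sd` and `n_items` gives a miniature version of the manipulation studied in the article.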
{"title":"The Impact of Item Model Parameter Variations on Person Parameter Estimation in Computerized Adaptive Testing With Automatically Generated Items.","authors":"Chen Tian, Jaehwa Choi","doi":"10.1177/01466216231165313","DOIUrl":"10.1177/01466216231165313","url":null,"abstract":"<p><p>Sibling items developed through automatic item generation share similar but not identical psychometric properties. However, considering sibling item variations may bring huge computation difficulties and little improvement on scoring. Assuming identical characteristics among siblings, this study explores the impact of item model parameter variations (i.e., within-family variation between siblings) on person parameter estimation in linear tests and Computerized Adaptive Testing (CAT). Specifically, we explore (1) what if small/medium/large within-family variance is ignored, (2) if the effect of larger within-model variance can be compensated by greater test length, (3) if the item model pool properties affect the impact of within-family variance on scoring, and (4) if the issues in (1) and (2) are different in linear vs. adaptive testing. Related sibling model is used for data generation and identical sibling model is assumed for scoring. Manipulated factors include test length, the size of within-model variation, and item model pool characteristics. Results show that as within-family variance increases, the standard error of scores remains at similar levels. For correlations between true and estimated score and RMSE, the effect of the larger within-model variance was compensated by test length. For bias, scores are biased towards the center, and bias was not compensated by test length. Despite the within-family variation is random in current simulations, to yield less biased ability estimates, the item model pool should provide balanced opportunities such that \"fake-easy\" and \"fake-difficult\" item instances cancel their effects. The results of CAT are similar to that of linear tests, except for higher efficiency.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 4","pages":"275-290"},"PeriodicalIF":1.2,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240571/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10300640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | Epub Date: 2023-01-19 | DOI: 10.1177/01466216231151704
Kuan-Yu Jin, Delroy L Paulhus, Ching-Lin Shih
A variety of approaches have been presented for assessing desirable responding in self-report measures. Among them, the overclaiming technique asks respondents to rate their familiarity with a large set of real and nonexistent items (foils). The application of signal detection formulas to the endorsement rates of real items and foils yields indices of (a) knowledge accuracy and (b) knowledge bias. This overclaiming technique reflects both cognitive ability and personality. Here, we develop an alternative measurement model based on multidimensional item response theory (MIRT). We report three studies demonstrating this new model's capacity to analyze overclaiming data. First, a simulation study illustrates that MIRT and signal detection theory yield comparable indices of accuracy and bias, although MIRT provides important additional information. Two empirical examples, one based on mathematical terms and one based on Chinese idioms, are then elaborated. Together, they demonstrate the utility of this new approach for group comparisons and item selection. The implications of this research are illustrated and discussed.
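The signal detection step is simple enough to show directly. A common choice (assumed here; the article's exact index definitions may differ) takes d' over the real-item and foil endorsement rates as knowledge accuracy and the criterion c as knowledge bias:

```python
import numpy as np
from scipy.stats import norm

def overclaiming_indices(real_endorsed, foil_endorsed, n_real, n_foils):
    """Signal-detection accuracy (d') and bias (criterion c) from endorsement counts."""
    # Log-linear correction keeps rates away from 0 and 1.
    hit = (real_endorsed + 0.5) / (n_real + 1)
    fa = (foil_endorsed + 0.5) / (n_foils + 1)
    accuracy = norm.ppf(hit) - norm.ppf(fa)          # knowledge accuracy
    bias = -0.5 * (norm.ppf(hit) + norm.ppf(fa))     # knowledge bias (negative = yes-saying)
    return accuracy, bias

# A respondent who claims familiarity with 70 of 90 real items and 6 of 30 foils:
acc, bias = overclaiming_indices(70, 6, 90, 30)
print(round(acc, 2), round(bias, 2))
```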
{"title":"A New Approach to Desirable Responding: Multidimensional Item Response Model of Overclaiming Data.","authors":"Kuan-Yu Jin, Delroy L Paulhus, Ching-Lin Shih","doi":"10.1177/01466216231151704","DOIUrl":"10.1177/01466216231151704","url":null,"abstract":"<p><p>A variety of approaches have been presented for assessing desirable responding in self-report measures. Among them, the overclaiming technique asks respondents to rate their familiarity with a large set of real and nonexistent items (foils). The application of signal detection formulas to the endorsement rates of real items and foils yields indices of (a) <i>knowledge accuracy</i> and (b) <i>knowledge bias</i>. This overclaiming technique reflects both cognitive ability and personality. Here, we develop an alternative measurement model based on multidimensional item response theory (MIRT). We report three studies demonstrating this new model's capacity to analyze overclaiming data. First, a simulation study illustrates that MIRT and signal detection theory yield comparable indices of accuracy and bias-although MIRT provides important additional information. Two empirical examples-one based on mathematical terms and one based on Chinese idioms-are then elaborated. Together, they demonstrate the utility of this new approach for group comparisons and item selection. The implications of this research are illustrated and discussed.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 3","pages":"221-236"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126390/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9363746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | Epub Date: 2023-03-21 | DOI: 10.1177/01466216231165315
Wenchao Ma, Chun Wang, Jiaying Xiao
In this article, a testlet hierarchical diagnostic classification model (TH-DCM) was introduced to take both attribute hierarchies and item bundles into account. The expectation-maximization algorithm with an analytic dimension reduction technique was used for parameter estimation. A simulation study was conducted to assess the parameter recovery of the proposed model under varied conditions and to compare the TH-DCM with the testlet higher-order CDM (THO-DCM; Hansen, M. (2013). Hierarchical item response models for cognitive diagnosis (Unpublished doctoral dissertation). UCLA; Zhan, P., Li, X., Wang, W.-C., Bian, Y., & Wang, L. (2015). The multidimensional testlet-effect cognitive diagnostic models. Acta Psychologica Sinica, 47(5), 689. https://doi.org/10.3724/SP.J.1041.2015.00689). Results showed that (1) ignoring large testlet effects worsened parameter recovery, (2) DCMs assuming equal testlet effects within each testlet performed as well as the testlet model assuming unequal testlet effects under most conditions, (3) misspecifications in the joint attribute distribution had a differential impact on parameter recovery, and (4) the THO-DCM seems to be a robust alternative to the TH-DCM under some hierarchical structures. A set of real data was also analyzed for illustration.
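For readers unfamiliar with the two ingredients, the sketch below generates data from a DINA-type model in which a linear attribute hierarchy restricts the permissible attribute profiles and a normally distributed testlet effect enters on the logit scale. This is only one plausible data-generating setup consistent with the description above, not the TH-DCM parameterization itself; the guess/slip values, Q-matrix, and hierarchy are assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

n_persons, n_items, n_testlets = 500, 12, 4
Q = rng.binomial(1, 0.5, size=(n_items, 3))          # Q-matrix over 3 attributes
Q[Q.sum(axis=1) == 0, 0] = 1                         # every item measures at least one attribute
testlet_of = np.repeat(np.arange(n_testlets), n_items // n_testlets)

# Linear hierarchy A1 -> A2 -> A3: only profiles 000, 100, 110, 111 are permissible.
profiles = np.array([p for p in product([0, 1], repeat=3)
                     if all(p[k] >= p[k + 1] for k in range(2))])
alpha = profiles[rng.integers(0, len(profiles), n_persons)]

guess, slip = 0.15, 0.15
testlet_sd = 1.0                                     # size of the testlet effect
gamma = rng.normal(0, testlet_sd, size=(n_persons, n_testlets))

# DINA-style mastery indicator, with the testlet effect added on the logit scale.
eta = (alpha @ Q.T == Q.sum(axis=1)).astype(float)   # persons x items
base = np.where(eta == 1, 1 - slip, guess)
logits = np.log(base / (1 - base)) + gamma[:, testlet_of]
resp = rng.binomial(1, 1 / (1 + np.exp(-logits)))

print(resp.shape, resp.mean().round(3))
```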
{"title":"A Testlet Diagnostic Classification Model with Attribute Hierarchies.","authors":"Wenchao Ma, Chun Wang, Jiaying Xiao","doi":"10.1177/01466216231165315","DOIUrl":"10.1177/01466216231165315","url":null,"abstract":"<p><p>In this article, a testlet hierarchical diagnostic classification model (TH-DCM) was introduced to take both attribute hierarchies and item bundles into account. The expectation-maximization algorithm with an analytic dimension reduction technique was used for parameter estimation. A simulation study was conducted to assess the parameter recovery of the proposed model under varied conditions, and to compare TH-DCM with testlet higher-order CDM (THO-DCM; Hansen, M. (2013). Hierarchical item response models for cognitive diagnosis (Unpublished doctoral dissertation). UCLA; Zhan, P., Li, X., Wang, W.-C., Bian, Y., & Wang, L. (2015). The multidimensional testlet-effect cognitive diagnostic models. Acta Psychologica Sinica, 47(5), 689. https://doi.org/10.3724/SP.J.1041.2015.00689). Results showed that (1) ignoring large testlet effects worsened parameter recovery, (2) DCMs assuming equal testlet effects within each testlet performed as well as the testlet model assuming unequal testlet effects under most conditions, (3) misspecifications in joint attribute distribution had an differential impact on parameter recovery, and (4) THO-DCM seems to be a robust alternative to TH-DCM under some hierarchical structures. A set of real data was also analyzed for illustration.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 3","pages":"183-199"},"PeriodicalIF":1.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9357116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | Epub Date: 2023-03-15 | DOI: 10.1177/01466216231165304
Alice Brawley Newlin
{"title":"On the Folly of Introducing A (Time-Based UMV), While Designing for B (Time-Based CMV).","authors":"Alice Brawley Newlin","doi":"10.1177/01466216231165304","DOIUrl":"10.1177/01466216231165304","url":null,"abstract":"","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 3","pages":"253-256"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126389/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9363899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The article presents a new computerized adaptive testing (CAT) procedure for use with batteries of unidimensional tests. At each step of testing, the estimate of a certain ability is updated on the basis of the response to the latest administered item and the current estimates of all other abilities measured by the battery. The information deriving from these abilities is incorporated into an empirical prior that is updated each time that new estimates of the abilities are computed. In two simulation studies, the performance of the proposed procedure is compared with that of a standard procedure for CAT with batteries of unidimensional tests. The proposed procedure yields more accurate ability estimates in fixed-length CATs, and a reduction of test length in variable-length CATs. These gains in accuracy and efficiency increase with the correlation between the abilities measured by the batteries.
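One natural reading of the empirical prior (an assumption here; the abstract does not give its exact form) is the conditional normal distribution of the target ability given the current estimates of the other abilities in the battery and their correlation matrix. The sketch below shows an EAP update for one ability under such a prior with a few 2PL items; all item parameters and correlations are illustrative.

```python
import numpy as np

def empirical_prior(k, theta_hat, R):
    """Conditional N(mu, sd) prior for ability k given current estimates of the other
    abilities, assuming standard-normal abilities with correlation matrix R (one plausible
    reading of the article's empirical prior, used here only for illustration)."""
    others = [i for i in range(len(theta_hat)) if i != k]
    R_oo = R[np.ix_(others, others)]
    r_ko = R[k, others]
    w = np.linalg.solve(R_oo, r_ko)
    mu = w @ theta_hat[others]
    var = 1.0 - r_ko @ w
    return mu, np.sqrt(var)

def eap_update(responses, a, b, mu, sd, grid=np.linspace(-4, 4, 81)):
    """EAP for one ability from its administered 2PL items under a N(mu, sd) prior."""
    p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))                # grid x items
    like = np.prod(np.where(np.array(responses) == 1, p, 1 - p), axis=1)
    prior = np.exp(-0.5 * ((grid - mu) / sd) ** 2)
    post = like * prior
    return (post * grid).sum() / post.sum()

# Battery of three correlated abilities; ability 0 is the one being updated.
R = np.array([[1.0, 0.6, 0.5], [0.6, 1.0, 0.4], [0.5, 0.4, 1.0]])
theta_hat = np.array([0.0, 0.8, 0.3])                             # current estimates
mu, sd = empirical_prior(0, theta_hat, R)
print(round(eap_update([1, 0, 1], np.array([1.2, 0.9, 1.5]),
                       np.array([-0.5, 0.0, 0.4]), mu, sd), 3))
```

The stronger the correlations in `R`, the tighter this conditional prior, which is consistent with the reported gains increasing with the correlation between the abilities.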
{"title":"Enhancing Computerized Adaptive Testing with Batteries of Unidimensional Tests.","authors":"Pasquale Anselmi, Egidio Robusto, Francesca Cristante","doi":"10.1177/01466216231165301","DOIUrl":"10.1177/01466216231165301","url":null,"abstract":"<p><p>The article presents a new computerized adaptive testing (CAT) procedure for use with batteries of unidimensional tests. At each step of testing, the estimate of a certain ability is updated on the basis of the response to the latest administered item and the current estimates of all other abilities measured by the battery. The information deriving from these abilities is incorporated into an empirical prior that is updated each time that new estimates of the abilities are computed. In two simulation studies, the performance of the proposed procedure is compared with that of a standard procedure for CAT with batteries of unidimensional tests. The proposed procedure yields more accurate ability estimates in fixed-length CATs, and a reduction of test length in variable-length CATs. These gains in accuracy and efficiency increase with the correlation between the abilities measured by the batteries.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 3","pages":"167-182"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126386/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9357115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | Epub Date: 2023-03-20 | DOI: 10.1177/01466216231165299
Yongze Xu, Ying Cui, Xinyi Wang, Meiwei Huang, Fang Luo
Test collusion (TC) is a form of cheating in which examinees operate in groups to alter normal item responses. TC is becoming increasingly common, especially in high-stakes, large-scale examinations. However, research on TC detection methods remains scarce. The present article proposes a new algorithm for TC detection, inspired by variable selection in high-dimensional statistical analysis. The algorithm relies only on item responses and supports different response similarity indices. Simulation and practical studies were conducted to (1) compare the performance of the new algorithm against the recently developed clique detector approach, and (2) verify the performance of the new algorithm in a large-scale testing setting.
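The confidence screening algorithm itself is not reproduced here, but the kind of response similarity index it can operate on is easy to illustrate. The sketch below plants a small answer-sharing group in simulated multiple-choice data and ranks examinee pairs by two simple indices (proportion of identical responses, and of identical incorrect responses); all settings are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(11)

n_examinees, n_items = 50, 40
resp = rng.integers(0, 4, size=(n_examinees, n_items))   # multiple-choice responses (4 options)
key = rng.integers(0, 4, n_items)

# Plant a small collusion group that shares most of its (partly wrong) answers.
shared = rng.integers(0, 4, n_items)
resp[:5] = np.where(rng.random((5, n_items)) < 0.9, shared, resp[:5])

def similarity(x, y, key):
    """Two simple response-similarity indices for a pair of examinees."""
    identical = np.mean(x == y)                           # proportion of identical responses
    identical_wrong = np.mean((x == y) & (x != key))      # proportion of identical incorrect responses
    return identical, identical_wrong

pairs = [(i, j, *similarity(resp[i], resp[j], key))
         for i, j in combinations(range(n_examinees), 2)]
flagged = sorted(pairs, key=lambda t: t[3], reverse=True)[:5]  # most suspicious pairs
for i, j, same, wrong_same in flagged:
    print(f"pair ({i:2d},{j:2d})  identical={same:.2f}  identical-incorrect={wrong_same:.2f}")
```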
{"title":"Confidence Screening Detector: A New Method for Detecting Test Collusion.","authors":"Yongze Xu, Ying Cui, Xinyi Wang, Meiwei Huang, Fang Luo","doi":"10.1177/01466216231165299","DOIUrl":"10.1177/01466216231165299","url":null,"abstract":"<p><p>Test collusion (TC) is a form of cheating in which, examinees operate in groups to alter normal item responses. TC is becoming increasingly common, especially within high-stakes, large-scale examinations. However, research on TC detection methods remains scarce. The present article proposes a new algorithm for TC detection, inspired by variable selection within high-dimensional statistical analysis. The algorithm relies only on item responses and supports different response similarity indices. Simulation and practical studies were conducted to (1) compare the performance of the new algorithm against the recently developed clique detector approach, and (2) verify the performance of the new algorithm in a large-scale test setting.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 3","pages":"237-252"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126388/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9363896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01 | Epub Date: 2023-01-24 | DOI: 10.1177/01466216231151702
Michela Battauz, Waldir Leôncio
Test equating is a statistical procedure to make scores from different test forms comparable and interchangeable. Focusing on an IRT approach, this paper proposes a novel method that simultaneously links the item parameter estimates of a large number of test forms. Our proposal differentiates itself from the current state of the art by using likelihood-based methods and by taking into account the heteroskedasticity and the correlation of the item parameter estimates of each form. Simulation studies show that our proposal yields equating coefficient estimates which are more efficient than what is currently available in the literature.
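The authors' method links many forms simultaneously with likelihood-based machinery; as background for what "linking item parameter estimates" means, the sketch below shows the classical two-form mean-sigma transformation of 2PL anchor-item parameters onto a base scale. The parameter values are made up, and this is not the proposed estimator.

```python
import numpy as np

def mean_sigma(b_base, b_new):
    """Mean-sigma equating coefficients that place the new form on the base-form scale."""
    A = np.std(b_base, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_base) - A * np.mean(b_new)
    return A, B

# Difficulty estimates of the common (anchor) items on two forms of a 2PL test.
b_base = np.array([-1.2, -0.4, 0.1, 0.7, 1.5])
b_new = np.array([-0.8, -0.1, 0.5, 1.0, 1.9])        # same items, different calibration scale
a_new = np.array([1.1, 0.9, 1.3, 0.8, 1.0])

A, B = mean_sigma(b_base, b_new)
b_linked = A * b_new + B                             # b* = A b + B
a_linked = a_new / A                                 # a* = a / A
print(round(A, 3), round(B, 3), np.round(b_linked, 2))
```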
{"title":"A Likelihood Approach to Item Response Theory Equating of Multiple Forms.","authors":"Michela Battauz, Waldir Leôncio","doi":"10.1177/01466216231151702","DOIUrl":"10.1177/01466216231151702","url":null,"abstract":"<p><p>Test equating is a statistical procedure to make scores from different test forms comparable and interchangeable. Focusing on an IRT approach, this paper proposes a novel method that simultaneously links the item parameter estimates of a large number of test forms. Our proposal differentiates itself from the current state of the art by using likelihood-based methods and by taking into account the heteroskedasticity and the correlation of the item parameter estimates of each form. Simulation studies show that our proposal yields equating coefficient estimates which are more efficient than what is currently available in the literature.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 3","pages":"200-220"},"PeriodicalIF":1.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126387/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9357110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | Epub Date: 2023-01-14 | DOI: 10.1177/01466216231151700
W Holmes Finch, Brian F French, Alicia Hazelwood
Social science research is heavily dependent on the use of standardized assessments of a variety of phenomena, such as mood, executive functioning, and cognitive ability. An important assumption when using these instruments is that they perform similarly for all members of the population. When this assumption is violated, the validity evidence of the scores is called into question. The standard approach for assessing the factorial invariance of measures across subgroups within the population involves multiple-group confirmatory factor analysis (MGCFA). CFA models typically, but not always, assume that once the latent structure of the model is accounted for, the residual terms for the observed indicators are uncorrelated (local independence). Commonly, correlated residuals are introduced after a baseline model shows inadequate fit, following inspection of modification indices to remedy fit. An alternative procedure for fitting latent variable models that may be useful when local independence does not hold is based on network models. In particular, the residual network model (RNM) offers promise for fitting latent variable models in the absence of local independence via an alternative search procedure. This simulation study compared the performance of MGCFA and the RNM for measurement invariance assessment when local independence is violated and residual covariances are themselves not invariant. Results revealed that the RNM had better Type I error control and higher power than MGCFA when local independence was absent. Implications of the results for statistical practice are discussed.
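To make the simulated condition concrete, the sketch below generates one-factor data for two groups in which local independence is violated through a correlated residual pair and the size of that residual covariance differs across groups (i.e., it is not invariant). Loadings, sample sizes, and covariance values are assumptions; fitting the MGCFA and RNM models requires dedicated SEM/network software and is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

n_per_group, n_indicators = 500, 6
loadings = np.full(n_indicators, 0.7)                 # invariant loadings, one factor

def residual_cov(extra):
    """Diagonal residual covariance plus a correlated-residual pair (indicators 0 and 1)."""
    cov = np.diag(1 - loadings**2)
    cov[0, 1] = cov[1, 0] = extra
    return cov

# Local independence is violated in both groups, and the violation is NOT invariant:
group_extra = {"reference": 0.10, "focal": 0.30}

data = {}
for g, extra in group_extra.items():
    eta = rng.normal(0, 1, n_per_group)                                     # common factor
    resid = rng.multivariate_normal(np.zeros(n_indicators), residual_cov(extra), n_per_group)
    data[g] = eta[:, None] * loadings + resid                               # observed indicators

for g, X in data.items():
    print(g, "observed cov(x1, x2) =", round(np.cov(X[:, 0], X[:, 1])[0, 1], 3))
```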
{"title":"A Comparison of Confirmatory Factor Analysis and Network Models for Measurement Invariance Assessment When Indicator Residuals are Correlated.","authors":"W Holmes Finch, Brian F French, Alicia Hazelwood","doi":"10.1177/01466216231151700","DOIUrl":"10.1177/01466216231151700","url":null,"abstract":"<p><p>Social science research is heavily dependent on the use of standardized assessments of a variety of phenomena, such as mood, executive functioning, and cognitive ability. An important assumption when using these instruments is that they perform similarly for all members of the population. When this assumption is violated, the validity evidence of the scores is called into question. The standard approach for assessing the factorial invariance of the measures across subgroups within the population involves multiple groups confirmatory factor analysis (MGCFA). CFA models typically, but not always, assume that once the latent structure of the model is accounted for, the residual terms for the observed indicators are uncorrelated (local independence). Commonly, correlated residuals are introduced after a baseline model shows inadequate fit and inspection of modification indices ensues to remedy fit. An alternative procedure for fitting latent variable models that may be useful when local independence does not hold is based on network models. In particular, the residual network model (RNM) offers promise with respect to fitting latent variable models in the absence of local independence via an alternative search procedure. This simulation study compared the performances of MGCFA and RNM for measurement invariance assessment when local independence is violated, and residual covariances are themselves not invariant. Results revealed that RNM had better Type I error control and higher power compared to MGCFA when local independence was absent. Implications of the results for statistical practice are discussed.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"47 2","pages":"106-122"},"PeriodicalIF":1.2,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9979199/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10845586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}