An Empirical Identification Issue of the Bifactor Item Response Theory Model.

Applied Psychological Measurement · Impact Factor 1.0 · Q4 (Psychology, Mathematical) · Pub Date: 2022-11-01 · Epub Date: 2022-07-10 · DOI: 10.1177/01466216221108133
Wenya Chen, Ken A Fujimoto

Abstract

Using the bifactor item response theory model to analyze data arising from educational and psychological studies has gained popularity over the years. Unfortunately, using this model in practice comes with challenges. One such challenge is an empirical identification issue that is seldom discussed in the literature, and its impact on the estimates of the bifactor model's parameters has not been demonstrated. This issue occurs when an item's discriminations on the general and specific dimensions are approximately equal (i.e., the within-item discriminations are similar in strength), leading to difficulties in obtaining unique estimates for those discriminations. We conducted three simulation studies to demonstrate that within-item discriminations being similar in strength creates problems in estimation stability. The results suggest that a large sample could alleviate but not resolve the problems, at least when considering sample sizes up to 4,000. When the discriminations within items were made clearly different, the estimates of these discriminations were more consistent across the data replicates than those observed when the discriminations within the items were similar. The results also show that the similarity of an item's discriminatory magnitudes on different dimensions has direct implications for the sample size needed to consistently obtain accurate parameter estimates. Although our goal was to provide evidence of the empirical identification issue, the study further reveals that the extent of similarity of within-item discriminations, the magnitude of discriminations, and how well the items are targeted to the respondents also factor into the estimation of the bifactor model's parameters.
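The identification issue the abstract describes can be illustrated numerically. The sketch below (not the authors' code; the specific discrimination values are hypothetical) computes a two-parameter bifactor item response function and compares the response surface of an item with equal within-item discriminations against one that trades the same total strength unevenly between the general and specific dimensions. Weighted by where respondents actually sit under independent standard-normal latent traits, the two surfaces are nearly indistinguishable, which is why the data carry little information to separate the two discriminations.

```python
import numpy as np

def bifactor_irf(theta_g, theta_s, a_g, a_s, d=0.0):
    """2PL bifactor item response function: P(y = 1 | theta_g, theta_s)."""
    z = a_g * theta_g + a_s * theta_s + d
    return 1.0 / (1.0 + np.exp(-z))

# Grid over the general (theta_g) and specific (theta_s) latent traits.
theta = np.linspace(-3, 3, 121)
tg, ts = np.meshgrid(theta, theta)

# Two hypothetical discrimination pairs that trade strength between the
# dimensions while keeping the same total (1.5 + 1.5 = 1.7 + 1.3).
p_equal = bifactor_irf(tg, ts, a_g=1.5, a_s=1.5)
p_traded = bifactor_irf(tg, ts, a_g=1.7, a_s=1.3)

# Weight each grid point by the independent standard-normal density of
# the latent traits, mirroring how respondents are distributed.
w = np.exp(-0.5 * (tg**2 + ts**2))
w /= w.sum()

# Density-weighted mean absolute gap between the two response surfaces:
# the item yields almost the same response probabilities either way, so
# the likelihood is nearly flat along this trade-off.
mean_gap = float((w * np.abs(p_equal - p_traded)).sum())
print(round(mean_gap, 3))
```

Because the weighted gap between the surfaces is tiny, finite samples struggle to pin down a unique pair of discrimination estimates, consistent with the estimation instability the simulation studies report.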

Journal overview: Applied Psychological Measurement publishes empirical research on the application of techniques of psychological measurement to substantive problems in all areas of psychology and related disciplines.