Effects of Using Double Ratings as Item Scores on IRT Proficiency Estimation

IF 1.1 · CAS Tier 4 (Education) · Q3 EDUCATION & EDUCATIONAL RESEARCH · Applied Measurement in Education · Pub Date: 2022-04-03 · DOI: 10.1080/08957347.2022.2067543
Yoon Ah Song, Won‐Chan Lee
Applied Measurement in Education, Vol. 35, No. 1, pp. 95–115.
Citations: 0

Abstract

This article examines the performance of item response theory (IRT) models when double ratings, rather than single ratings, are used as item scores in the presence of rater effects. Study 1 examined the influence of the number of ratings on the accuracy of proficiency estimation under the generalized partial credit model (GPCM). Study 2 compared the accuracy of proficiency estimation of two IRT models (the GPCM versus the hierarchical rater model, HRM) for double ratings. The main findings were as follows: (a) rater effects substantially reduced the accuracy of IRT proficiency estimation; (b) double ratings mitigated the negative impact of rater effects and improved accuracy relative to single ratings; (c) the IRT estimators showed different patterns of conditional accuracy; (d) accuracy improved as more items and more score categories were used; and (e) the HRM consistently outperformed the GPCM.
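The core idea of the abstract (finding b) can be illustrated with a small simulation. The sketch below is a hypothetical toy version, not the authors' design: item parameters, the integer rater-severity effect, and the rounded-mean rule for combining two ratings are all illustrative assumptions. It simulates GPCM item responses, perturbs them by rater severity, and compares the error of an EAP proficiency estimate under single versus double ratings.

```python
# Toy simulation of single vs. double ratings under a GPCM with rater effects.
# All parameter values and the rounding rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, MAX_SCORE = 10, 4                        # 5 score categories: 0..4
a = np.full(N_ITEMS, 1.0)                         # item discriminations
b = rng.normal(0.0, 0.7, (N_ITEMS, MAX_SCORE))    # step difficulties

def gpcm_probs(theta, a_i, b_i):
    """GPCM category probabilities P(X = k | theta) for one item."""
    steps = np.concatenate(([0.0], np.cumsum(a_i * (theta - b_i))))
    e = np.exp(steps - steps.max())               # stabilized softmax
    return e / e.sum()

def true_scores(theta):
    """Draw an error-free GPCM item-score vector for ability theta."""
    return np.array([rng.choice(MAX_SCORE + 1, p=gpcm_probs(theta, a[i], b[i]))
                     for i in range(N_ITEMS)])

def rate(scores, severity):
    """A rater shifts every true score by an integer severity effect."""
    return np.clip(scores + severity, 0, MAX_SCORE)

QUAD = np.linspace(-4, 4, 81)                     # quadrature grid for EAP
PRIOR = np.exp(-0.5 * QUAD**2)                    # standard normal prior

def eap(scores):
    """EAP estimate of theta given observed item scores."""
    like = np.ones_like(QUAD)
    for i, x in enumerate(scores):
        like *= np.array([gpcm_probs(t, a[i], b[i])[x] for t in QUAD])
    post = like * PRIOR
    return (QUAD * post).sum() / post.sum()

thetas = rng.normal(0, 1, 200)
err_single, err_double = [], []
for th in thetas:
    true = true_scores(th)
    r1 = rate(true, rng.integers(-1, 2))          # rater 1 severity in {-1,0,1}
    r2 = rate(true, rng.integers(-1, 2))          # rater 2 severity in {-1,0,1}
    err_single.append(eap(r1) - th)               # single rating as item score
    # double rating: rounded mean of the two ratings as the item score
    err_double.append(eap(np.rint((r1 + r2) / 2).astype(int)) - th)

rmse1 = float(np.sqrt(np.mean(np.square(err_single))))
rmse2 = float(np.sqrt(np.mean(np.square(err_double))))
print(f"RMSE single rating: {rmse1:.3f}, double rating: {rmse2:.3f}")
```

Averaging two independent severity effects shrinks the effective rating error (an extreme shift now requires both raters to err in the same direction), so the double-rating RMSE comes out smaller, mirroring finding (b) in miniature.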
Source journal: Applied Measurement in Education · CiteScore: 2.50 · Self-citation rate: 13.30% · Articles per year: 14
About the journal: Because interaction between the domains of research and application is critical to the evaluation and improvement of new educational measurement practices, Applied Measurement in Education's prime objective is to improve communication between academicians and practitioners. To help bridge the gap between theory and practice, articles in this journal describe original research studies, innovative strategies for solving educational measurement problems, and integrative reviews of current approaches to contemporary measurement issues. Peer review policy: all review papers in this journal have undergone editorial screening and peer review.
Latest articles from this journal:
New Tests of Rater Drift in Trend Scoring
Automated Scoring of Short-Answer Questions: A Progress Report
Item and Test Characteristic Curves of Rank-2PL Models for Multidimensional Forced-Choice Questionnaires
Impact of violating unidimensionality on Rasch calibration for mixed-format tests
Can Adaptive Testing Improve Test-Taking Experience? A Case Study on Educational Survey Assessment