Comparing the Robustness of Three Nonparametric DIF Procedures to Differential Rapid Guessing

Applied Measurement in Education · Pub Date: 2022-04-03 · DOI: 10.1080/08957347.2022.2067542 · IF 1.1 · CAS Tier 4 (Education) · JCR Q3 (Education & Educational Research)
Mohammed A. A. Abulela, Joseph A. Rios

Abstract

When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the Mantel-Haenszel (MH), standardization index (STD), and logistic regression (LR) differential item functioning (DIF) procedures to type I error in the presence of differential RG. Sample size, test difficulty, group impact, and differential RG rates were manipulated. Findings revealed that the LR procedure was completely robust to type I errors, while slightly elevated false positive rates (< 1%) were observed for the MH and STD procedures. An applied analysis examining data from the Programme for International Student Assessment showed minimal differences in DIF classifications when comparing data in which RG responses were unfiltered and filtered. These results suggest that large differences in RG rates between subgroups are unassociated with false positive classifications of DIF.
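To make the MH procedure referenced above concrete, the following is a minimal sketch of the standard Mantel-Haenszel DIF statistics computed from score-level 2×2 tables. The function name, the input layout, and the example tables are illustrative assumptions, not the authors' simulation code; the formulas are the usual MH common odds ratio, ETS delta transform, and continuity-corrected chi-square.

```python
import math

def mantel_haenszel_dif(tables):
    """Mantel-Haenszel DIF statistics from matched-score 2x2 tables.

    tables: list of (A, B, C, D) counts per score stratum, where
      A = reference-group correct,  B = reference-group incorrect,
      C = focal-group correct,      D = focal-group incorrect.
    Returns (alpha_MH, delta_MH, chi2_MH).
    """
    num = den = 0.0                  # accumulators for the common odds ratio
    sum_a = sum_ea = sum_var = 0.0   # accumulators for the chi-square
    for a, b, c, d in tables:
        n = a + b + c + d
        if n <= 1:
            continue                 # degenerate stratum, skip
        num += a * d / n
        den += b * c / n
        n_r, n_f = a + b, c + d      # group sizes in the stratum
        m1, m0 = a + c, b + d        # correct / incorrect margins
        sum_a += a
        sum_ea += n_r * m1 / n       # expected A under no DIF
        sum_var += n_r * n_f * m1 * m0 / (n * n * (n - 1))
    alpha = num / den                            # common odds ratio
    delta = -2.35 * math.log(alpha)              # ETS delta scale
    chi2 = (abs(sum_a - sum_ea) - 0.5) ** 2 / sum_var  # MH chi-square
    return alpha, delta, chi2
```

With equal proportions correct for both groups in every stratum (no DIF), alpha is 1 and delta is 0; a chi-square above 3.84 would flag the item at the .05 level. The LR procedure the study found most robust would instead compare nested logistic models (score only vs. score plus group), which this sketch does not implement.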
Source journal: Applied Measurement in Education · CiteScore: 2.50 · Self-citation rate: 13.30% · Articles published: 14
About the journal: Because interaction between the domains of research and application is critical to the evaluation and improvement of new educational measurement practices, Applied Measurement in Education's prime objective is to improve communication between academicians and practitioners. To help bridge the gap between theory and practice, articles in this journal describe original research studies, innovative strategies for solving educational measurement problems, and integrative reviews of current approaches to contemporary measurement issues. Peer Review Policy: All review papers in this journal have undergone editorial screening and peer review.
Latest articles in this journal:
New Tests of Rater Drift in Trend Scoring
Automated Scoring of Short-Answer Questions: A Progress Report
Item and Test Characteristic Curves of Rank-2PL Models for Multidimensional Forced-Choice Questionnaires
Impact of violating unidimensionality on Rasch calibration for mixed-format tests
Can Adaptive Testing Improve Test-Taking Experience? A Case Study on Educational Survey Assessment