Incorporating Test‐Taking Engagement into Multistage Adaptive Testing Design for Large‐Scale Assessments

IF 1.4 · JCR Region 4 (Psychology) · Q3 Psychology, Applied · Journal of Educational Measurement · Pub Date: 2023-11-10 · DOI: 10.1111/jedm.12380
Okan Bulut, Guher Gorgun, Hacer Karamese
Citations: 0

Abstract

The use of multistage adaptive testing (MST) has gradually increased in large‐scale testing programs as MST achieves a balanced compromise between linear test design and item‐level adaptive testing. MST works on the premise that each examinee gives their best effort when attempting the items, and their responses truly reflect what they know or can do. However, research shows that large‐scale assessments may suffer from a lack of test‐taking engagement, especially if they are low stakes. Examinees with low test‐taking engagement are likely to show noneffortful responding (e.g., answering the items very rapidly without reading the item stem or response options). To alleviate the impact of noneffortful responses on the measurement accuracy of MST, test‐taking engagement can be operationalized as a latent trait based on response times and incorporated into the on‐the‐fly module assembly procedure. To demonstrate the proposed approach, a Monte Carlo simulation study was conducted based on item parameters from an international large‐scale assessment. The results indicated that the on‐the‐fly module assembly considering both ability and test‐taking engagement could minimize the impact of noneffortful responses, yielding more accurate ability estimates and classifications. Implications for practice and directions for future research are discussed.
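The abstract notes that noneffortful responding can be detected from response times. As a minimal, hypothetical sketch (not the authors' latent-trait model), the widely used normative-threshold heuristic from the rapid-guessing literature flags a response as noneffortful when its time falls below a fixed fraction of that item's median response time; the function and parameter names below are illustrative:

```python
import statistics

def engagement_scores(rt_matrix, threshold_fraction=0.10):
    """Illustrative normative-threshold heuristic for rapid guessing.

    rt_matrix[i][j]: response time (in seconds) of examinee i on item j.
    Returns one engagement score per examinee: the proportion of that
    examinee's responses classified as effortful.
    """
    n_items = len(rt_matrix[0])
    # Per-item threshold: a fraction of the item's median response time.
    thresholds = [
        threshold_fraction * statistics.median(row[j] for row in rt_matrix)
        for j in range(n_items)
    ]
    scores = []
    for row in rt_matrix:
        # A response at or above the threshold counts as effortful.
        effortful = sum(rt >= thresholds[j] for j, rt in enumerate(row))
        scores.append(effortful / n_items)
    return scores
```

An examinee who answers in a couple of seconds while others take half a minute would receive a low score, which a module-assembly procedure could then weigh alongside the ability estimate, in the spirit of the approach the article proposes.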
Source journal metrics: CiteScore 2.30 · Self-citation rate 7.70% · Annual articles 46
Journal description: The Journal of Educational Measurement (JEM) publishes original measurement research, provides reviews of measurement publications, and reports on innovative measurement applications. The topics addressed will interest those concerned with the practice of measurement in field settings, as well as measurement theorists. In addition to presenting new contributions to measurement theory and practice, JEM also serves as a vehicle for improving educational measurement applications in a variety of settings.
Latest articles from this journal:
- Sequential Reservoir Computing for Log File‐Based Behavior Process Data Analyses
- Issue Information
- Exploring Latent Constructs through Multimodal Data Analysis
- Robustness of Item Response Theory Models under the PISA Multistage Adaptive Testing Designs
- Modeling Nonlinear Effects of Person‐by‐Item Covariates in Explanatory Item Response Models: Exploratory Plots and Modeling Using Smooth Functions