ROAR-CAT: Rapid Online Assessment of Reading ability with Computerized Adaptive Testing.

Behavior Research Methods · Pub Date: 2025-01-14 · DOI: 10.3758/s13428-024-02578-y · IF 4.6 · Q1 (Psychology, Experimental) · CAS Region 2 (Psychology)
Wanjing Anya Ma, Adam Richie-Halford, Amy K Burkhardt, Klint Kanopka, Clementine Chou, Benjamin W Domingue, Jason D Yeatman

Abstract

The Rapid Online Assessment of Reading (ROAR) is a web-based lexical decision task that measures single-word reading abilities in children and adults without a proctor. Here we study whether item response theory (IRT) and computerized adaptive testing (CAT) can be used to create a more efficient online measure of word recognition. To construct an item bank, we first analyzed data taken from four groups of students (N = 1960) who differed in age, socioeconomic status, and language-based learning disabilities. The majority of item parameters were highly consistent across groups (r = .78-.94), and six items that functioned differently across groups were removed. Next, we implemented a JavaScript CAT algorithm and conducted a validation experiment with 485 students in grades 1-8 who were randomly assigned to complete trials of all items in the item bank in either (a) a random order or (b) a CAT order. We found that, to achieve reliability of 0.9, CAT improved test efficiency by 40%: 75 CAT items produced the same standard error of measurement as 125 items in a random order. Subsequent validation in 32 public school classrooms showed that an approximately 3-min ROAR-CAT can achieve high correlations (r = .89 for first grade, r = .73 for second grade) with alternative 5-15-min individually proctored oral reading assessments. Our findings suggest that ROAR-CAT is a promising tool for efficiently and accurately measuring single-word reading ability. Furthermore, our development process serves as a model for creating adaptive online assessments that bridge research and practice.
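The CAT design the abstract describes can be illustrated with a short simulation: select the most informative item for the current ability estimate under a 2PL IRT model, re-estimate ability after each response, and stop once the standard error of measurement (SEM) meets the target implied by reliability 0.9 (on a standardized ability scale, reliability = 1 − SEM², so the target SEM is √0.1 ≈ 0.316). This is a minimal sketch, not the paper's JavaScript implementation: the item bank, the maximum-information selection rule, and the grid-search maximum-likelihood estimator are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2PL item bank (hypothetical parameters, not the ROAR item bank):
# a = discrimination, b = difficulty.
n_bank = 200
a = rng.uniform(0.8, 2.0, size=n_bank)
b = rng.normal(0.0, 1.5, size=n_bank)

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of each item at theta (2PL: a^2 * p * (1 - p))."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def run_cat(true_theta, sem_target=(1 - 0.9) ** 0.5, max_items=150):
    """Administer items by maximum information until the SEM target is met
    (reliability 0.9 <=> SEM ~ 0.316, since reliability = 1 - SEM^2 on a
    standardized ability scale)."""
    administered, responses = [], []
    theta, sem = 0.0, np.inf
    grid = np.linspace(-4.0, 4.0, 161)  # grid-search ability estimator (a sketch)
    for _ in range(max_items):
        info = item_information(theta, a, b)
        info[administered] = -np.inf  # never reuse an administered item
        j = int(np.argmax(info))
        administered.append(j)
        responses.append(rng.random() < p_correct(true_theta, a[j], b[j]))
        ai, bi = a[administered], b[administered]
        P = np.clip(p_correct(grid[:, None], ai, bi), 1e-12, 1 - 1e-12)
        r = np.asarray(responses, dtype=float)
        loglik = (r * np.log(P) + (1 - r) * np.log(1 - P)).sum(axis=1)
        theta = grid[np.argmax(loglik)]  # maximum-likelihood theta on the grid
        sem = 1.0 / np.sqrt(item_information(theta, ai, bi).sum())
        if sem <= sem_target:
            break
    return theta, len(administered), sem

theta_hat, n_items, sem = run_cat(true_theta=0.5)
print(f"estimated theta={theta_hat:.2f} after {n_items} items (SEM={sem:.3f})")
```

In practice, a run of all-correct or all-incorrect early responses pushes the grid MLE to a boundary, which operational CATs address with Bayesian estimators such as EAP. The 40% efficiency gain reported in the abstract is simply 1 − 75/125: CAT ordering reaches the same SEM as random ordering with 50 fewer items.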

Journal overview: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research; an annual special issue is devoted to this field.