ROAR-CAT: Rapid Online Assessment of Reading ability with Computerized Adaptive Testing
Wanjing Anya Ma, Adam Richie-Halford, Amy K Burkhardt, Klint Kanopka, Clementine Chou, Benjamin W Domingue, Jason D Yeatman
Behavior Research Methods, volume 57, issue 1, article 56. Published 2025-01-14. DOI: 10.3758/s13428-024-02578-y
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11732908/pdf/
Citations: 0
Abstract
The Rapid Online Assessment of Reading (ROAR) is a web-based lexical decision task that measures single-word reading abilities in children and adults without a proctor. Here we study whether item response theory (IRT) and computerized adaptive testing (CAT) can be used to create a more efficient online measure of word recognition. To construct an item bank, we first analyzed data taken from four groups of students (N = 1960) who differed in age, socioeconomic status, and language-based learning disabilities. The majority of item parameters were highly consistent across groups (r = .78-.94), and six items that functioned differently across groups were removed. Next, we implemented a JavaScript CAT algorithm and conducted a validation experiment with 485 students in grades 1-8 who were randomly assigned to complete trials of all items in the item bank in either (a) a random order or (b) a CAT order. We found that, to achieve reliability of 0.9, CAT improved test efficiency by 40%: 75 CAT items produced the same standard error of measurement as 125 items in a random order. Subsequent validation in 32 public school classrooms showed that an approximately 3-min ROAR-CAT can achieve high correlations (r = .89 for first grade, r = .73 for second grade) with alternative 5-15-min individually proctored oral reading assessments. Our findings suggest that ROAR-CAT is a promising tool for efficiently and accurately measuring single-word reading ability. Furthermore, our development process serves as a model for creating adaptive online assessments that bridge research and practice.
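The abstract states that the authors implemented a JavaScript CAT algorithm on top of an IRT-calibrated item bank. The sketch below is a minimal, illustrative example of one adaptive step under a two-parameter logistic (2PL) model with maximum-information item selection; the item parameters, function names (prob2PL, itemInformation, selectNextItem, standardError), and the stopping criterion are assumptions for illustration and are not taken from the ROAR-CAT item bank or the authors' implementation.

// Sketch of one CAT step under a 2PL IRT model (illustrative only).

// 2PL probability of a correct response given ability theta
function prob2PL(theta, { a, b }) {
  return 1 / (1 + Math.exp(-a * (theta - b)));
}

// Fisher information contributed by an item at ability theta
function itemInformation(theta, item) {
  const p = prob2PL(theta, item);
  return item.a * item.a * p * (1 - p);
}

// Maximum-information selection: choose the unadministered item that is
// most informative at the current ability estimate.
function selectNextItem(theta, itemBank, administeredIds) {
  let best = null;
  let bestInfo = -Infinity;
  for (const item of itemBank) {
    if (administeredIds.has(item.id)) continue;
    const info = itemInformation(theta, item);
    if (info > bestInfo) {
      bestInfo = info;
      best = item;
    }
  }
  return best;
}

// Standard error of measurement from accumulated test information. On a
// standardized ability scale, reliability is approximately 1 - SEM^2, so the
// reported reliability of 0.9 corresponds to an SEM of roughly 0.32.
function standardError(theta, administeredItems) {
  const testInfo = administeredItems.reduce(
    (sum, item) => sum + itemInformation(theta, item),
    0
  );
  return 1 / Math.sqrt(testInfo);
}

// Hypothetical usage with made-up item parameters
const itemBank = [
  { id: "word-01", a: 1.2, b: -0.5 },
  { id: "word-02", a: 0.9, b: 0.3 },
  { id: "word-03", a: 1.5, b: 1.1 },
];
const next = selectNextItem(0.0, itemBank, new Set());
console.log(next.id); // most informative item at theta = 0

In a full CAT loop, the ability estimate would be updated (e.g., by maximum likelihood or expected a posteriori estimation) after each response, and administration would stop once the standard error falls below the target implied by the desired reliability.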
Journal overview:
Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.