Efficient Estimation of Mean Ability Growth Using Vertical Scaling

Applied Measurement in Education · IF 1.1 · Q3, Education & Educational Research · Pub Date: 2021-06-15 · DOI: 10.1080/08957347.2021.1933981
Jonas Bjermo, Frank Miller
Applied Measurement in Education, Volume 34, Issue 1, pp. 163-178.
Citations: 1

Abstract

In recent years, the interest in measuring growth in student ability in various subjects between different grades in school has increased. Therefore, good precision in the estimated growth is of importance. This paper aims to compare estimation methods and test designs when it comes to precision and bias of the estimated growth of mean ability between two groups of students that differ substantially. This is performed by a simulation study. One- and two-parameter item response models are assumed and the estimated abilities are vertically scaled using the non-equivalent anchor test design by estimating the abilities in one single run, so-called concurrent calibration. The connection between the test design and the Fisher information is also discussed. The results indicate that the expected a posteriori estimation method is preferred when estimating differences in mean ability between groups. Results also indicate that a test design with common items of medium difficulty leads to better precision, which coincides with previous results from horizontal equating.
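The methods named in the abstract can be illustrated concretely. The sketch below (not the authors' code; a minimal illustration with made-up item parameters) shows a two-parameter logistic (2PL) item response function, an expected a posteriori (EAP) ability estimate computed by numerical quadrature under a standard normal prior, and the Fisher information of a 2PL item, which peaks at abilities near the item's difficulty:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: P(correct | theta) for
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, n_quad=61):
    """Expected a posteriori (EAP) ability estimate under a standard
    normal prior, computed by simple numerical quadrature."""
    theta = np.linspace(-4.0, 4.0, n_quad)          # quadrature nodes
    prior = np.exp(-0.5 * theta**2)                 # N(0, 1) up to a constant
    p = p_2pl(theta[:, None], a, b)                 # shape (n_quad, n_items)
    likelihood = np.prod(np.where(responses, p, 1.0 - p), axis=1)
    posterior = likelihood * prior
    return np.sum(theta * posterior) / np.sum(posterior)

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * p * (1 - p).
    It is maximized at theta = b, which is why common items of medium
    difficulty link groups centered near theta = 0 most precisely."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item parameters for illustration only.
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.5, 1.0])
resp = np.array([1, 1, 0, 0], dtype=bool)  # correct on easy, wrong on hard
theta_hat = eap_estimate(resp, a, b)
```

Under concurrent calibration, as described in the abstract, response data from both grade groups and the common anchor items would be calibrated in one run, so that the resulting ability estimates lie on a single vertical scale and the group means can be compared directly.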
Source journal: Applied Measurement in Education
CiteScore: 2.50
Self-citation rate: 13.30%
Articles published: 14
About the journal: Because interaction between the domains of research and application is critical to the evaluation and improvement of new educational measurement practices, Applied Measurement in Education's prime objective is to improve communication between academicians and practitioners. To help bridge the gap between theory and practice, articles in this journal describe original research studies, innovative strategies for solving educational measurement problems, and integrative reviews of current approaches to contemporary measurement issues. Peer Review Policy: All review papers in this journal have undergone editorial screening and peer review.
Latest articles from this journal:
- New Tests of Rater Drift in Trend Scoring
- Automated Scoring of Short-Answer Questions: A Progress Report
- Item and Test Characteristic Curves of Rank-2PL Models for Multidimensional Forced-Choice Questionnaires
- Impact of violating unidimensionality on Rasch calibration for mixed-format tests
- Can Adaptive Testing Improve Test-Taking Experience? A Case Study on Educational Survey Assessment