Comparisons Among Approaches to Link Tests Using Random Samples Selected Under Suboptimal Conditions

ETS Research Report Series (Q3, Social Sciences), 2021(1), 1-20. Published: 2021-08-11. DOI: 10.1002/ets2.12328
Sooyeon Kim, Michael E. Walker
{"title":"Comparisons Among Approaches to Link Tests Using Random Samples Selected Under Suboptimal Conditions","authors":"Sooyeon Kim,&nbsp;Michael E. Walker","doi":"10.1002/ets2.12328","DOIUrl":null,"url":null,"abstract":"<p>Equating the scores from different forms of a test requires collecting data that link the forms. Problems arise when the test forms to be linked are given to groups that are not equivalent and the forms share no common items by which to measure or adjust for this group nonequivalence. We compared three approaches to adjusting for group nonequivalence in a situation where not only is randomization questionable, but the number of common items is small. Group adjustment through either subgroup weighting, a weak anchor, or a mix of both was evaluated in terms of linking accuracy using a resampling approach. We used data from a single test form to create two research forms for which the equating relationship was known. The results showed that both subgroup weighting and weak anchor approaches produced nearly equivalent linking results when group equivalence was not met. Direct (random groups) linking methods produced the least accurate result due to nontrivial bias. Use of subgroup weighting and linking using the anchor test only marginally improved linking accuracy compared to using the weak anchor alone when the degree of group nonequivalence was small.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2021 1","pages":"1-20"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ets2.12328","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ETS Research Report Series","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ets2.12328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 1

Abstract

Equating the scores from different forms of a test requires collecting data that link the forms. Problems arise when the test forms to be linked are given to groups that are not equivalent and the forms share no common items by which to measure or adjust for this group nonequivalence. We compared three approaches to adjusting for group nonequivalence in a situation where not only is randomization questionable but the number of common items is also small. Group adjustment through subgroup weighting, a weak anchor, or a mix of both was evaluated in terms of linking accuracy using a resampling approach. We used data from a single test form to create two research forms for which the equating relationship was known. The results showed that the subgroup weighting and weak anchor approaches produced nearly equivalent linking results when group equivalence was not met. Direct (random groups) linking methods produced the least accurate results because of nontrivial bias. Combining subgroup weighting with anchor-based linking only marginally improved linking accuracy over using the weak anchor alone when the degree of group nonequivalence was small.
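
The report itself contains no code, but the three adjustment strategies the abstract contrasts can be sketched concretely. The Python sketch below is an illustration under simplifying assumptions, not the authors' procedure: it uses linear observed-score linking, simulated data, and invented function and variable names (direct_linear, chained_linear_anchor, weighted_linear) to show direct random-groups linking, chained linking through a weak anchor, and subgroup weighting.

```python
# Illustrative sketch only: the report may use different linking functions
# (e.g., equipercentile) and a different weighting scheme. All names and the
# simulated data here are assumptions for demonstration.
import numpy as np


def direct_linear(x, y):
    """Random-groups (direct) linear linking: match the mean and SD of form X
    scores to form Y scores. Returns slope a and intercept b of y* = a*x + b.
    Appropriate only when the two groups are randomly equivalent."""
    a = np.std(y) / np.std(x)
    b = np.mean(y) - a * np.mean(x)
    return a, b


def chained_linear_anchor(x, anchor_x, y, anchor_y):
    """Chained linear linking through a (possibly weak) anchor test A:
    link X to the anchor scale in the new-form group, link the anchor scale
    to Y in the old-form group, then compose the two conversions."""
    a1, b1 = direct_linear(x, anchor_x)   # X -> A (new-form group)
    a2, b2 = direct_linear(anchor_y, y)   # A -> Y (old-form group)
    return a2 * a1, a2 * b1 + b2          # composed slope and intercept


def weighted_linear(x, x_group, y, y_group):
    """Subgroup-weighted linking: reweight the new-form sample so its subgroup
    proportions match the old-form sample, then link weighted moments of X to Y.
    Assumes every subgroup observed in Y also appears in X."""
    groups, counts = np.unique(y_group, return_counts=True)
    target = dict(zip(groups, counts / counts.sum()))         # Y-sample proportions
    observed = {g: np.mean(x_group == g) for g in groups}     # X-sample proportions
    w = np.array([target[g] / observed[g] for g in x_group])  # per-examinee weight
    mx = np.average(x, weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))        # weighted SD of X
    a = np.std(y) / sx
    b = np.mean(y) - a * mx
    return a, b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    # Two "research forms" built from the same score model, so the true X-to-Y
    # conversion is the identity (slope 1, intercept 0), while the groups taking
    # them differ in ability (nonequivalent groups).
    theta_new = rng.normal(-0.3, 1.0, n)           # new-form group, less able
    theta_old = rng.normal(0.0, 1.0, n)            # old-form group
    x = 30 + 6 * theta_new + rng.normal(0, 2, n)   # form X scores
    y = 30 + 6 * theta_old + rng.normal(0, 2, n)   # form Y scores
    ax = 10 + 2 * theta_new + rng.normal(0, 1, n)  # weak anchor, new group
    ay = 10 + 2 * theta_old + rng.normal(0, 1, n)  # weak anchor, old group
    # Subgroup labels correlated with ability, observed in both samples.
    gx = (theta_new + rng.normal(0, 0.8, n) > 0).astype(int)
    gy = (theta_old + rng.normal(0, 0.8, n) > 0).astype(int)

    print("direct  :", direct_linear(x, y))
    print("anchor  :", chained_linear_anchor(x, ax, y, ay))
    print("weighted:", weighted_linear(x, gx, y, gy))
```

Because both simulated forms come from the same score model, the true X-to-Y conversion is the identity line, so departures of each returned slope and intercept from (1, 0) show how much each adjustment reduces the bias introduced by group nonequivalence; wrapping these calls in a bootstrap loop over resampled examinees would loosely mirror the resampling evaluation the authors describe.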

