Assessing Mode Effects of At-Home Testing Without a Randomized Trial

ETS Research Report Series, 2021(1), pp. 1-21 · Q3, Social Sciences · Published 2021-07-20 · DOI: 10.1002/ets2.12323
Sooyeon Kim, Michael Walker
Citations: 5

Abstract

In this investigation, we used real data to assess potential differential effects associated with taking a test in a test center (TC) versus testing at home using remote proctoring (RP). We used a pseudo-equivalent groups (PEG) approach to examine group equivalence at the item level and the total score level. If our assumption holds that the PEG approach removes between-group ability differences (as measured by the test) reasonably well, then a plausible explanation for any systematic differences in performance between TC and RP groups that remain after applying the PEG approach would be the operation of test mode effects. At the item level, we compared item difficulties estimated using the PEG approach (i.e., adjusting only for ability differences between groups) to those estimated via delta equating (i.e., adjusting for any systematic differences between groups). All tests used in this investigation showed small, nonsystematic differences, providing evidence of trivial effects associated with at-home testing. At the total score level, we linked the RP group scores to the TC group scores after adjusting for group differences using demographic covariates. We then compared the resulting RP group conversion to the original TC group conversion (the criterion in this study). The magnitude of differences between the RP conversion and the TC conversion was small, leading to the same pass/fail decision for most RP examinees. The present analyses seem to suggest little to no mode effects for the tests used in this investigation.
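The item-level comparison described above can be illustrated with a small sketch. This is not the authors' code: the item p-values below are made-up toy numbers, and the linking shown is an ordinary mean-sigma link on the ETS delta scale (Δ = 13 + 4·Φ⁻¹(1 − p), so harder items get higher deltas), used here as a stand-in for the delta-equating adjustment the abstract refers to. The idea it demonstrates is the paper's logic: after a link that absorbs any overall group difference, small and nonsystematic item-level residuals are evidence against a mode effect.

```python
# Hedged sketch: ETS delta transform plus a mean-sigma link between groups.
# The p-values are hypothetical toy data, not values from the study.
from statistics import NormalDist, mean, stdev

def delta(p_correct):
    """ETS delta scale: Delta = 13 + 4 * z, with z = Phi^{-1}(1 - p).
    Lower proportion correct (harder item) -> higher delta."""
    return 13.0 + 4.0 * NormalDist().inv_cdf(1.0 - p_correct)

# Toy proportions correct on the same items for the two testing modes.
p_tc = [0.84, 0.62, 0.50, 0.31]  # test-center (TC) group
p_rp = [0.80, 0.58, 0.47, 0.28]  # remote-proctored (RP) group

d_tc = [delta(p) for p in p_tc]
d_rp = [delta(p) for p in p_rp]

# Mean-sigma linking: rescale RP deltas onto the TC delta scale,
# absorbing any systematic overall difference between the groups.
a = stdev(d_tc) / stdev(d_rp)
b = mean(d_tc) - a * mean(d_rp)
d_rp_linked = [a * d + b for d in d_rp]

# Per-item residuals after linking. Small, nonsystematic residuals
# are the pattern the paper reports as evidence against mode effects.
residuals = [t - r for t, r in zip(d_tc, d_rp_linked)]
print([round(r, 3) for r in residuals])
```

By construction the mean-sigma link matches the two groups' delta means and standard deviations exactly, so only item-by-item (nonsystematic) departures survive in the residuals.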

