Validity, Reliability, and Fairness Evidence for the JD‐Next Exam

ETS Research Report Series (Q3, Social Sciences) · Pub Date: 2024-04-10 · DOI: 10.1002/ets2.12378
Steven Holtzman, Jonathan Steinberg, Jonathan Weeks, Christopher Robertson, Jessica Findley, David M Klieger
Citations: 0

Abstract

At a time when institutions of higher education are exploring alternatives to traditional admissions testing, institutions are also seeking to better support students and prepare them for academic success. Under such an engaged model, one may seek to measure not just the accumulated knowledge and skills that students would bring to a new academic program but also their ability to grow and learn through the academic program. To help prepare students for law school before they matriculate, the JD‐Next is a fully online, noncredit, 7‐ to 10‐week course to train potential juris doctor students in case reading and analysis skills. This study builds on the work presented for previous JD‐Next cohorts by introducing new scoring and reliability estimation methodologies based on a recent redesign of the assessment for the 2021 cohort, and it presents updated validity and fairness findings using first‐year grades, rather than merely first‐semester grades as in prior cohorts. Results support the claim that the JD‐Next exam is reliable and valid for predicting law school success, providing a statistically significant increase in predictive power over baseline models, including entrance exam scores and grade point averages. In terms of fairness across racial and ethnic groups, smaller score disparities are found with JD‐Next than with traditional admissions assessments, and the assessment is shown to be equally predictive for students from underrepresented minority groups and for first‐generation students. These findings, in conjunction with those from previous research, support the use of the JD‐Next exam for both preparing and admitting future law school students.
Source Journal

ETS Research Report Series (Social Sciences – Education)

CiteScore: 1.20
Self-citation rate: 0.00%
Articles published: 17
Latest articles in this journal

- Building a Validity Argument for the TOEFL Junior® Tests
- Validity, Reliability, and Fairness Evidence for the JD‐Next Exam
- Practical Considerations in Item Calibration With Small Samples Under Multistage Test Design: A Case Study
- Modeling Writing Traits in a Formative Essay Corpus