The automated grading of student open responses in mathematics

John A. Erickson, Anthony F. Botelho, Steven McAteer, A. Varatharaj, N. Heffernan
DOI: 10.1145/3375462.3375523
Published in: Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (LAK '20), March 23, 2020
Citations: 27

Abstract

The use of computer-based systems in classrooms has provided teachers with new opportunities in delivering content to students, supplementing instruction, and assessing student knowledge and comprehension. Among the largest benefits of these systems is their ability to provide students with feedback on their work and also report student performance and progress to their teacher. While computer-based systems can automatically assess student answers to a range of question types, many systems face a limitation in regard to open-ended problems. Many systems are either unable to provide support for open-ended problems, relying on the teacher to grade them manually, or avoid such question types entirely. Due to recent advancements in natural language processing methods, the automation of essay grading has made notable strides. However, much of this research has pertained to domains outside of mathematics, where open-ended problems can be used by teachers to assess students' understanding of mathematical concepts beyond what is possible on other types of problems. This research explores the viability and challenges of developing automated graders of open-ended student responses in mathematics. We further explore how the scale of available data impacts model performance. Focusing on content delivered through the ASSISTments online learning platform, we present a set of analyses pertaining to the development and evaluation of models to predict teacher-assigned grades for student open responses.
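The abstract describes models that predict teacher-assigned grades from the text of student open responses. The paper's actual method is not specified on this page; the following is a minimal, hypothetical sketch of the general idea, using a simple bag-of-words nearest-neighbor baseline in which an ungraded response receives the grade of the most similar previously graded response. All data and function names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical baseline (not the paper's method): grade an ungraded open
# response by copying the teacher grade of the most textually similar
# graded response, measured by bag-of-words cosine similarity.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_grade(response: str, graded: list[tuple[str, int]]) -> int:
    """Return the grade of the most similar previously graded response."""
    vec = Counter(response.lower().split())
    best = max(graded, key=lambda pair: cosine(vec, Counter(pair[0].lower().split())))
    return best[1]

# Hypothetical teacher-graded responses on a 0-4 scale.
graded = [
    ("the slope is rise over run so the slope is two", 4),
    ("i do not know", 0),
    ("both sides grow by the same amount each step", 3),
]

print(predict_grade("the slope is rise over run", graded))  # prints 4
```

A production grader would instead use richer representations (e.g., word embeddings or learned language models, as the NLP advances mentioned above suggest) and be evaluated against held-out teacher grades, but the sketch shows the core prediction task: mapping response text to a teacher-style score.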