Towards the Automation of Grading Textual Student Submissions to Open-ended Questions

Jan Philip Bernius, Anna V. Kovaleva, Stephan Krusche, B. Brügge
{"title":"Towards the Automation of Grading Textual Student Submissions to Open-ended Questions","authors":"Jan Philip Bernius, Anna V. Kovaleva, Stephan Krusche, B. Brügge","doi":"10.1145/3396802.3396805","DOIUrl":null,"url":null,"abstract":"Growing student numbers at universities worldwide pose new challenges for instructors. Providing feedback to textual exercises is a challenge in large courses while being important for student's learning success. Exercise submissions and their grading are a primary and individual communication channel between instructors and students. The pure amount of submissions makes it impossible for a single instructor to provide regular feedback to large student bodies. Employing tutors in the process introduces new challenges. Feedback should be consistent and fair for all students. Additionally, interactive teaching models strive for real-time feedback and multiple submissions. We propose a support system for grading textual exercises using an automatic segment-based assessment concept. The system aims at providing suggestions to instructors by reusing previous comments as well as scores. The goal is to reduce the workload for instructors, while at the same time creating timely and consistent feedback to the students. We present the design and a prototypical implementation of an algorithm using topic modeling for segmenting the submissions into smaller blocks. Thereby, the system derives smaller units for assessment and allowing the creation of reusable and structured feedback. We have evaluated the algorithm qualitatively by comparing automatically produced segments with manually produced segments created by humans. The results show that the system can produce topically coherent segments. 
The segmentation algorithm based on topic modeling is superior to approaches purely based on syntax and punctuation.","PeriodicalId":277576,"journal":{"name":"Proceedings of the 4th European Conference on Software Engineering Education","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th European Conference on Software Engineering Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3396802.3396805","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Growing student numbers at universities worldwide pose new challenges for instructors. Providing feedback on textual exercises in large courses is challenging, yet important for students' learning success. Exercise submissions and their grading are a primary, individual communication channel between instructors and students. The sheer number of submissions makes it impossible for a single instructor to provide regular feedback to a large student body. Employing tutors in the process introduces new challenges: feedback should be consistent and fair for all students. Additionally, interactive teaching models strive for real-time feedback and multiple submissions. We propose a support system for grading textual exercises based on an automatic segment-based assessment concept. The system provides suggestions to instructors by reusing previous comments and scores. The goal is to reduce the workload for instructors while providing timely and consistent feedback to students. We present the design and a prototypical implementation of an algorithm that uses topic modeling to segment submissions into smaller blocks. The system thereby derives smaller units for assessment and allows the creation of reusable, structured feedback. We evaluated the algorithm qualitatively by comparing automatically produced segments with segments created manually by humans. The results show that the system can produce topically coherent segments. The segmentation algorithm based on topic modeling is superior to approaches based purely on syntax and punctuation.
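The abstract contrasts topic-based segmentation with purely syntactic splitting. As a rough illustration of the general idea — not the paper's actual topic-modeling algorithm — the following sketch uses simple lexical cohesion (TextTiling-style bag-of-words similarity between adjacent sentences) to decide where one topical segment ends and the next begins. All function names and the threshold value are illustrative assumptions.

```python
import re
from collections import Counter
from math import sqrt

def sentences(text):
    """Naive sentence split on terminal punctuation."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def bow(sentence):
    """Lower-cased bag-of-words vector for one sentence."""
    return Counter(re.findall(r'[a-z]+', sentence.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def segment(text, threshold=0.1):
    """Group consecutive sentences into segments; open a new segment
    whenever lexical similarity to the previous sentence drops below
    the threshold (a crude stand-in for a topic shift)."""
    sents = sentences(text)
    if not sents:
        return []
    segments = [[sents[0]]]
    for prev, cur in zip(sents, sents[1:]):
        if cosine(bow(prev), bow(cur)) < threshold:
            segments.append([cur])
        else:
            segments[-1].append(cur)
    return [' '.join(seg) for seg in segments]
```

A topic-model variant would replace the bag-of-words vectors with per-sentence topic distributions (e.g., inferred by LDA) and compare those instead, which is closer to the approach the paper describes.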