Automatic Assessment of Complex Assignments using Topic Models

Saar Kuzi, W. Cope, D. Ferguson, Chase Geigle, Chengxiang Zhai
{"title":"Automatic Assessment of Complex Assignments using Topic Models","authors":"Saar Kuzi, W. Cope, D. Ferguson, Chase Geigle, Chengxiang Zhai","doi":"10.1145/3330430.3333615","DOIUrl":null,"url":null,"abstract":"Automated assessment of complex assignments is crucial for scaling up learning of complex skills such as critical thinking. To address this challenge, one previous work has applied supervised machine learning to automate the assessment by learning from examples of graded assignments by humans. However, in the previous work, only simple lexical features, such as words or n-grams, have been used. In this paper, we propose to use topics as features for this task, which are more interpretable than those simple lexical features and can also address polysemy and synonymy of lexical semantics. The topics can be learned automatically from the student assignment data by using a probabilistic topic model. We propose and study multiple approaches to construct topical features and to combine topical features with simple lexical features. We evaluate the proposed methods using clinical case assignments performed by veterinary medicine students. The experimental results show that topical features are generally very effective and can substantially improve performance when added on top of the lexical features. However, their effectiveness is highly sensitive to how the topics are constructed and a combination of topics constructed using multiple views of the text data works the best. Our results also show that combining the prediction results of using different types of topical features and of topical and lexical features is more effective than pooling all features together to form a larger feature space.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"3 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3330430.3333615","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Automated assessment of complex assignments is crucial for scaling up learning of complex skills such as critical thinking. To address this challenge, one previous work applied supervised machine learning to automate the assessment by learning from examples of assignments graded by humans. However, that work used only simple lexical features, such as words or n-grams. In this paper, we propose to use topics as features for this task; topics are more interpretable than simple lexical features and can also address the polysemy and synonymy of lexical semantics. The topics can be learned automatically from the student assignment data using a probabilistic topic model. We propose and study multiple approaches to constructing topical features and to combining topical features with simple lexical features. We evaluate the proposed methods using clinical case assignments performed by veterinary medicine students. The experimental results show that topical features are generally very effective and can substantially improve performance when added on top of the lexical features. However, their effectiveness is highly sensitive to how the topics are constructed, and a combination of topics constructed from multiple views of the text data works best. Our results also show that combining the prediction results of different types of topical features, and of topical and lexical features, is more effective than pooling all features together to form a larger feature space.
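
To make the feature strategies in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it uses scikit-learn's LatentDirichletAllocation as the probabilistic topic model, unigram/bigram counts as the lexical features, and a logistic-regression grader. The library choice, the 20-topic setting, the binary grade labels, and the toy assignment texts are all illustrative assumptions; the paper itself does not specify these details.

```python
# Sketch of (a) pooling lexical and topical features and (b) combining the
# predictions of separate lexical and topical models (late fusion), which
# the paper reports to be the more effective strategy.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Hypothetical student assignment texts and grade labels (toy data).
assignments = [
    "The dog presents with lethargy and loss of appetite ...",
    "Differential diagnosis includes renal failure and toxin exposure ...",
    "Treatment plan: fluid therapy and electrolyte monitoring ...",
    "The cat shows signs of hyperthyroidism; recommend a T4 panel ...",
] * 10  # repeated so the example has enough documents to fit
grades = np.array([0, 1, 1, 0] * 10)  # e.g., 0 = low grade, 1 = high grade

# Simple lexical features: word and bigram counts (the baseline features).
lex_vec = CountVectorizer(ngram_range=(1, 2), min_df=2)
X_lex = lex_vec.fit_transform(assignments)

# Topical features: document-topic distributions from a topic model (LDA)
# learned automatically from the same student assignment data.
lda = LatentDirichletAllocation(n_components=20, random_state=0)
X_topic = lda.fit_transform(X_lex)  # shape: (n_docs, n_topics)

# (a) Early fusion: pool lexical and topical features into one space.
X_pooled = hstack([X_lex, csr_matrix(X_topic)])
clf_pooled = LogisticRegression(max_iter=1000).fit(X_pooled, grades)

# (b) Late fusion: train separate predictors and average their predicted
#     grade probabilities.
clf_lex = LogisticRegression(max_iter=1000).fit(X_lex, grades)
clf_topic = LogisticRegression(max_iter=1000).fit(X_topic, grades)
combined_proba = (clf_lex.predict_proba(X_lex) +
                  clf_topic.predict_proba(X_topic)) / 2
predictions = combined_proba.argmax(axis=1)
```

In this sketch, "multiple views of the text data" could correspond to fitting separate topic models on different representations of the assignments and fusing their predictions in the same way; the specific views used in the paper are not reproduced here.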