
Proceedings of the Second (2015) ACM Conference on Learning @ Scale: Latest Publications

Using and Designing Platforms for In Vivo Educational Experiments
Pub Date: 2015-02-14 | DOI: 10.1145/2724660.2728704
J. Williams, Korinn S. Ostrow, Xiaolu Xiong, Elena L. Glassman, Juho Kim, Samuel G. Maldonado, Na Li, J. Reich, N. Heffernan
In contrast to typical laboratory experiments, the everyday use of online educational resources by large populations and the prevalence of software infrastructure for A/B testing leads us to consider how platforms can embed in vivo experiments that do not merely support research, but ensure practical improvements to their educational components. Examples are presented of randomized experimental comparisons conducted by subsets of the authors in three widely used online educational platforms -- Khan Academy, edX, and ASSISTments. We suggest design principles for platform technology to support randomized experiments that lead to practical improvements -- enabling Iterative Improvement and Collaborative Work -- and explain the benefit of their implementation by WPI co-authors in the ASSISTments platform.
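The paper does not include an implementation, but the kind of embedded A/B assignment it describes can be sketched as follows. This is a minimal, hypothetical example (the function and experiment names are illustrative): conditions are assigned by hashing the learner and experiment IDs, so a returning learner always sees the same variant without any stored state.

```python
import hashlib

def assign_condition(user_id: str, experiment: str, conditions: list) -> str:
    """Deterministically assign a learner to an experimental condition by
    hashing (experiment, user_id), so repeat visits see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]

# A learner is randomized once and stays in the same arm thereafter.
arm = assign_condition("learner-42", "hint-style", ["text_hint", "video_hint"])
```

Deterministic hashing rather than a coin flip is what makes the randomization reproducible across sessions, which matters for the iterative-improvement loop the authors describe.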
Citations: 6
Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions
Pub Date: 2015-01-18 | DOI: 10.1145/2724660.2724664
Andrew S. Lan, Divyanshu Vats, Andrew E. Waters, Richard Baraniuk
While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open response mathematical questions that figure prominently in STEM (science, technology, engineering, and mathematics) courses. Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors. MLP takes inspiration from the success of natural language processing for text data and comprises three main steps. First, we convert each solution to an open response mathematical question into a series of numerical features. Second, we cluster the features from several solutions to uncover the structures of correct, partially correct, and incorrect solutions. We develop two different clustering approaches, one that leverages generic clustering algorithms and one based on Bayesian nonparametrics. Third, we automatically grade the remaining (potentially large number of) solutions based on their assigned cluster and one instructor-provided grade per cluster. As a bonus, we can track the cluster assignment of each step of a multistep solution and determine when it departs from a cluster of correct solutions, which enables us to indicate the likely locations of errors to learners. We test and validate MLP on real-world MOOC data to demonstrate how it can substantially reduce the human effort required in large-scale educational platforms.
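The clustering-then-grading pipeline (steps two and three above) can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: the feature vectors, the plain k-means routine, and the grade values are all hypothetical stand-ins for MLP's actual feature extraction and Bayesian-nonparametric clustering.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means with deterministic, evenly spaced initial centers."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each solution to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical numerical features for six learner solutions (step one of MLP
# would extract these from the mathematical expressions themselves).
X = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1],   # near-correct solutions
              [5.0, 4.0], [5.2, 4.1], [4.8, 3.9]])   # a shared-error group
labels = kmeans(X, k=2)

# One instructor-provided grade per cluster propagates to every member.
cluster_grade = {labels[0]: 1.0, labels[3]: 0.4}
grades = [cluster_grade[l] for l in labels]
```

The cost saving comes from the last two lines: the instructor grades one representative per cluster, and the (potentially large) remainder inherits that grade automatically.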
Citations: 63
Effective Sampling for Large-scale Automated Writing Evaluation Systems
Pub Date: 2014-12-17 | DOI: 10.1145/2724660.2724661
Nicholas Dronen, P. Foltz, Kyle Habermehl
Automated writing evaluation (AWE) has been shown to be an effective mechanism for quickly providing feedback to students. It has already seen wide adoption in enterprise-scale applications and is starting to be adopted in large-scale contexts. Training an AWE model has historically required a single batch of several hundred writing examples and human scores for each of them. This requirement limits large-scale adoption of AWE since human-scoring essays is costly. Here we evaluate algorithms for ensuring that AWE models are consistently trained using the most informative essays. Our results show how to minimize training set sizes while maximizing predictive performance, thereby reducing cost without unduly sacrificing accuracy. We conclude with a discussion of how to integrate this approach into large-scale AWE systems.
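One generic way to pick "the most informative essays" for human scoring is greedy farthest-point sampling over essay feature vectors, so the scored set covers the feature space rather than duplicating similar essays. This sketch is an assumption-laden stand-in for the algorithms the paper actually evaluates; the feature vectors and budget are hypothetical.

```python
import numpy as np

def select_for_scoring(X, budget):
    """Greedily pick `budget` essays whose feature vectors best cover the
    pool: each new pick is the essay farthest from everything chosen so far."""
    chosen = [0]  # seed with an arbitrary first essay
    dists = np.linalg.norm(X - X[0], axis=1)
    while len(chosen) < budget:
        nxt = int(dists.argmax())  # farthest remaining essay
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

# A pool of 100 essays: two deliberate outliers plus a dense mainstream cluster.
pool = np.vstack([np.zeros(2), np.full(2, 10.0),
                  np.random.default_rng(1).normal(size=(98, 2))])
picked = select_for_scoring(pool, budget=3)
```

Only the `picked` essays would go to human raters; a model trained on them then scores the rest, which is how the training-set size (and scoring cost) is minimized.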
Citations: 17