
Proceedings of the Second (2015) ACM Conference on Learning @ Scale: Latest Publications

Supporting Instructors in Collaborating with Researchers using MOOClets
Pub Date : 2015-02-14 DOI: 10.2139/ssrn.2580666
J. Williams, Juho Kim, Brian Keegan
Most education and workplace learning takes place in classroom contexts far removed from laboratories or field sites with special arrangements for scientific research. But digital online resources provide a novel opportunity for large-scale efforts to bridge the real-world and laboratory settings which support data collection and randomized A/B experiments comparing different versions of content or interactions [2]. However, there are substantial technological and practical barriers in aligning instructors and researchers to use learning technologies like blended lessons/exercises & MOOCs as both a service for students and a realistic context to conduct research. This paper explains how the concept of a "MOOClet" can facilitate research-practitioner collaborations. MOOClets [3] are defined as modular components of a digital resource that can be implemented in technology to: (1) allow modification to create multiple versions, (2) allow experimental comparison and personalization of different versions, (3) reliably specify what data are collected. We suggest a framework in which instructors specify what kinds of changes to lessons, exercises, and emails they would be willing to adopt, and what data they will collect and make available. Researchers can then: (1) specify or design experiments that compare the effects of different versions on quantifiable outcomes, and (2) explore algorithms for maximizing particular outcomes by choosing alternative versions of a MOOClet based on the input variables available. We present a prototype survey tool for instructors intended to facilitate practitioner-researcher matches and successful collaborations.
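The abstract's three MOOClet properties (multiple versions, experimental comparison, specified data collection) can be illustrated with a minimal sketch. This is not the authors' implementation; the `MOOClet` class, its fields, and the simple random A/B assignment policy are all hypothetical names invented here for illustration.

```python
import random
from dataclasses import dataclass, field

@dataclass
class MOOClet:
    """A modular course component with interchangeable versions (illustrative sketch)."""
    name: str
    versions: dict                      # version id -> content shown to the learner
    log: list = field(default_factory=list)  # the data the MOOClet commits to collecting

    def assign(self, learner_id, rng):
        """Pick a version under a simple random A/B policy and record the event."""
        version = rng.choice(sorted(self.versions))
        self.log.append({"learner": learner_id, "version": version})
        return self.versions[version]

# Hypothetical usage: a hint MOOClet with two candidate wordings.
rng = random.Random(0)
hint = MOOClet("hint", {"A": "Try factoring.", "B": "Recall the quadratic formula."})
for learner_id in range(100):
    hint.assign(learner_id, rng)

counts = {"A": 0, "B": 0}
for event in hint.log:
    counts[event["version"]] += 1
```

A researcher could later swap the random policy for an outcome-maximizing one (the abstract's point 2) without touching the instructor-facing content, since versions, assignment, and logging are separated.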
Citations: 4
Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions
Pub Date : 2015-01-18 DOI: 10.1145/2724660.2724664
Andrew S. Lan, Divyanshu Vats, Andrew E. Waters, Richard Baraniuk
While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open response mathematical questions that figure prominently in STEM (science, technology, engineering, and mathematics) courses. Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors. MLP takes inspiration from the success of natural language processing for text data and comprises three main steps. First, we convert each solution to an open response mathematical question into a series of numerical features. Second, we cluster the features from several solutions to uncover the structures of correct, partially correct, and incorrect solutions. We develop two different clustering approaches, one that leverages generic clustering algorithms and one based on Bayesian nonparametrics. Third, we automatically grade the remaining (potentially large number of) solutions based on their assigned cluster and one instructor-provided grade per cluster. As a bonus, we can track the cluster assignment of each step of a multistep solution and determine when it departs from a cluster of correct solutions, which enables us to indicate the likely locations of errors to learners. We test and validate MLP on real-world MOOC data to demonstrate how it can substantially reduce the human effort required in large-scale educational platforms.
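The pipeline in this abstract (convert solutions to numeric features, cluster them, then propagate one instructor grade per cluster) can be sketched in miniature. This is only a toy stand-in, not the paper's method: it uses a single hypothetical 1-D feature per solution and a tiny k-means in place of the paper's feature extraction and Bayesian nonparametric clustering.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means; returns a cluster label for each value."""
    svals = sorted(values)
    # Initialize centers at evenly spaced quantiles of the sorted values.
    centers = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

# Hypothetical numeric feature per solution (e.g. distance from the correct answer).
features = [0.0, 0.1, 0.05, 2.0, 2.1, 5.0, 4.9]
labels = kmeans_1d(features, k=3)

# The instructor grades one representative solution per cluster;
# that grade propagates to every other solution in the same cluster.
cluster_grade = {labels[0]: 3, labels[3]: 1, labels[5]: 0}  # full, partial, no credit
grades = [cluster_grade[lab] for lab in labels]
```

The cost saving comes from the last step: the instructor hand-grades k solutions rather than all of them, exactly the leverage the abstract describes.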
Citations: 63
Effective Sampling for Large-scale Automated Writing Evaluation Systems
Pub Date : 2014-12-17 DOI: 10.1145/2724660.2724661
Nicholas Dronen, P. Foltz, Kyle Habermehl
Automated writing evaluation (AWE) has been shown to be an effective mechanism for quickly providing feedback to students. It has already seen wide adoption in enterprise-scale applications and is starting to be adopted in large-scale contexts. Training an AWE model has historically required a single batch of several hundred writing examples and human scores for each of them. This requirement limits large-scale adoption of AWE since human-scoring essays is costly. Here we evaluate algorithms for ensuring that AWE models are consistently trained using the most informative essays. Our results show how to minimize training set sizes while maximizing predictive performance, thereby reducing cost without unduly sacrificing accuracy. We conclude with a discussion of how to integrate this approach into large-scale AWE systems.
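One common way to pick "the most informative essays" for human scoring is diversity-based selection: greedily choose the essay farthest (in feature space) from everything already selected. The sketch below is a generic illustration of that idea under a hypothetical 1-D essay feature; the paper's actual algorithms and features may differ.

```python
def farthest_point_sample(features, budget):
    """Greedy diversity sampling: repeatedly pick the item whose minimum
    distance to the already-selected set is largest."""
    selected = [0]  # seed with the first essay
    while len(selected) < budget:
        best, best_dist = None, -1.0
        for i, f in enumerate(features):
            if i in selected:
                continue
            d = min(abs(f - features[j]) for j in selected)
            if d > best_dist:
                best, best_dist = i, d
        selected.append(best)
    return selected

# Hypothetical 1-D feature per essay (e.g. a length-normalized vocabulary score).
essays = [0.1, 0.15, 0.12, 0.9, 0.95, 0.5]
picked = farthest_point_sample(essays, budget=3)
```

Human scores would then be collected only for the `budget` selected essays, shrinking the training set while keeping it representative, which is the cost/accuracy trade-off the abstract evaluates.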
Citations: 17