Adaptive and social mechanisms for automated improvement of eLearning materials
K. Buffardi, S. Edwards
DOI: 10.1145/2556325.2567861

Online environments introduce unprecedented scale for formal and informal learning communities. In these environments, user-contributed content enables social constructivist approaches to education. In particular, students can help each other by providing hints and suggestions on how to approach problems, by rating each other's suggestions, and by engaging in discussions about the questions. Students can also learn by composing their own questions. Furthermore, with grounding in Item Response Theory, data mining and statistical student models can assess questions and hints for their quality and effectiveness. As a result, internet-scale learning environments allow us to move from simple, canned quizzing systems to a new model in which automated, data-driven analysis continuously assesses and refines the quality of teaching material. Our poster describes a framework and prototype of an online drill-and-practice system that leverages user-contributed content and large-scale data to organically improve itself.
Divide and correct: using clusters to grade short answers at scale
Michael Brooks, S. Basu, Charles Jacobs, Lucy Vanderwende
DOI: 10.1145/2556325.2566243

In comparison to multiple choice or other recognition-oriented forms of assessment, short answer questions have been shown to offer greater value for both students and teachers; for students they can improve retention of knowledge, while for teachers they provide more insight into student understanding. Unfortunately, the same open-ended nature which makes them so valuable also makes them more difficult to grade at scale. To address this, we propose a cluster-based interface that allows teachers to read, grade, and provide feedback on large groups of answers at once. We evaluated this interface against an unclustered baseline in a within-subjects study with 25 teachers, and found that the clustered interface allows teachers to grade substantially faster, to give more feedback to students, and to develop a high-level view of students' understanding and misconceptions.
Understanding in-video dropouts and interaction peaks in online lecture videos
Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, Rob Miller
DOI: 10.1145/2556325.2566237

With thousands of learners watching the same online lecture videos, analyzing video watching patterns provides a unique opportunity to understand how students learn with videos. This paper reports a large-scale analysis of in-video dropout and peaks in viewership and student activity, using second-by-second user interaction data from 862 videos in four Massive Open Online Courses (MOOCs) on edX. We find higher dropout rates in longer videos, re-watching sessions (vs. first-time sessions), and tutorials (vs. lectures). Peaks in re-watching sessions and play events indicate points of interest and confusion. Results show that tutorials (vs. lectures) and re-watching sessions (vs. first-time sessions) lead to more frequent and sharper peaks. In attempting to explain why peaks occur, we sampled 80 videos and observed that 61% of the peaks accompany visual transitions in the video, e.g., a change from a slide view to a classroom view. Based on this observation, we identify five student activity patterns that can explain peaks: starting from the beginning of new material, returning to missed content, following a tutorial step, replaying a brief segment, and repeating a non-visual explanation. Our analysis has design implications for video authoring, editing, and interface design, providing a richer understanding of video learning in MOOCs.
L@S 2014 demo: best practices for MOOC video
Daniel D. Garcia, Michael A. Ball, Aatash Parikh
DOI: 10.1145/2556325.2567889

UC Berkeley's CS10 course captures high-definition lectures featuring a unique overlay of the professor over the slides. This paper is a brief overview of the demo we presented at L@S 2014. We also describe the other forms of video we incorporate into the class and present tips and tricks we have learned in both the pre-production and production stages of the video process.
ACCE: automatic coding composition evaluator
S. Rogers, Steven Tang, J. Canny
DOI: 10.1145/2556325.2567876

Coding style is important to teach to beginning programmers so that bad habits do not become permanent. This is often done manually at the university level because automated Python static analyzers cannot accurately grade against a given rubric. However, even manual analysis of coding style encounters problems, as we have seen considerable inconsistency among our graders. We introduce ACCE (Automated Coding Composition Evaluator), a module that automates grading of program composition. Given certain constraints, ACCE assesses the composition of a program through static analysis, conversion from code to an abstract syntax tree (AST), and clustering (unsupervised learning), helping to automate the subjective process of grading on style and identifying common mistakes. Further, we create visual representations of the clusters to allow readers and students to understand where a submission falls and to see the overall trends. We have applied this tool to CS61A, a CS1-level course at UC Berkeley experiencing rapid growth in student enrollment, in an attempt to expedite this labor-intensive process and reduce inconsistencies among human graders.
Improving online class forums by seeding discussions and managing section size
Kelly Miller, Sacha Zyto, David R. Karger, E. Mazur
DOI: 10.1145/2556325.2567866

Discussion forums are an integral part of all online and many offline courses. But in many cases they are presented as an afterthought, offered to the students to use as they wish. In this paper, we explore ways to steer discussion forums to produce high-quality learning interactions. In the context of a Physics course, we investigate two ideas: seeding the forum with prior-year student content, and varying the sizes of "sections" of students who can see each other's comments.
Model thinking: demographics and performance of MOOC students unable to afford a formal education
Tawanna R. Dillahunt, Bingxin Chen, Stephanie D. Teasley
DOI: 10.1145/2556325.2567851

Massive Open Online Courses (MOOCs) are seen as an opportunity for individuals to gain access to education, develop new skills to prepare for high-paying jobs, and achieve upward mobility without incurring the increasingly high debt that comes with a university degree. Despite this perception, few studies have examined whether the populations with the most to gain actually leverage these resources. We analyzed student demographic information from course surveys and performance data from MOOC participation in a single course. We targeted students who stated that they were motivated to take the course because they "cannot afford to pursue a formal education," and compared them to the group of all other students. Our three key findings are that 1) this population contains a higher percentage of non-traditionally enrolled students than the comparison population, 2) an independent t-test showed that significantly more of this group (28%) than of the comparison group (15%) holds less than a 4-year college degree, and 3) the completion rates of the two groups are roughly equal.
Student explorer: a tool for supporting academic advising at scale
Steven Lonn, Stephanie D. Teasley
DOI: 10.1145/2556325.2567867

Student Explorer is an early warning system designed to support academic advising that uses learning analytics to categorize students' ongoing academic performance and effort. Advisors use this tool to provide just-in-time assistance to students at risk of underperforming in their classes. Student Explorer is designed to eventually support targeted advising for thousands of undergraduate students.
Monitoring MOOCs: which information sources do instructors value?
Kristin Stephens-Martinez, Marti A. Hearst, A. Fox
DOI: 10.1145/2556325.2566246

For an instructor teaching a massive open online course (MOOC), what is the best way to understand their class? What is the best way to view how students are interacting with the content while the course is running? To help prepare for the next iteration, how should the course's data be analyzed after the fact? How do these instructional monitoring needs differ between online courses with tens of thousands of students and courses with only tens? This paper reports the results of a survey of 92 MOOC instructors who answered questions about which information they find useful in their course, with the end goal of creating an information display for MOOC instructors. The main findings are: (i) quantitative data sources such as grades, although useful, are not sufficient; a large majority of respondents rated understanding the activity in discussion forums and student surveys as useful for all use cases, (ii) chat logs were not seen as useful, (iii) for the most part, the same sources of information were seen as useful as in surveys of smaller online courses, (iv) respondents reacted positively to mockups of existing and novel visualization techniques, both for use while the course is running and for planning a revision of the course, and (v) a wide range of views was expressed about other details.
Do professors matter?: using an A/B test to evaluate the impact of instructor involvement on MOOC student outcomes
J. Tomkin, D. Charlevoix
DOI: 10.1145/2556325.2566245

This research investigates the impact that professors and other instructional staff have on student content-knowledge acquisition in a physical science MOOC offered through the University of Illinois at Urbana-Champaign. An A/B test was used to randomly assign MOOC participants to either a control group (with no instructional interaction) or an intervention group (in which the professor and teaching assistants responded to comments in the discussion forum and compiled weekly summary feedback statements), in order to identify differences in learning outcomes, participation rates, and student satisfaction. The study found that instructor intervention had no statistically significant impact on overall completion rates, overall badge acquisition rates, student participation rates, or satisfaction with the course, but did (p < 0.05) lead to a higher rate of forum badge completion, an area that was targeted by the intervention.