"Learning Student and Content Embeddings for Personalized Lesson Sequence Recommendation"
S. Reddy, I. Labutov, T. Joachims. DOI: 10.1145/2876034.2893375

Students in online courses generate large amounts of data that can be used to personalize the learning process and improve the quality of education. In this paper, we present the Latent Skill Embedding (LSE), a probabilistic model of students and educational content that can be used to recommend personalized sequences of lessons with the goal of helping students prepare for specific assessments. Akin to collaborative filtering for recommender systems, the algorithm does not require students or content to be described by features; instead, it learns a representation from access traces. We formulate this problem as a regularized maximum-likelihood embedding of students, lessons, and assessments from historical student-content interactions. Empirical findings on large-scale data from Knewton, an adaptive learning technology company, show that this approach predicts assessment results competitively with benchmark models and is able to discriminate between lesson sequences that lead to mastery and those that lead to failure.
"Modeling Student Scheduling Preferences in a Computer-Based Testing Facility"
Matthew West, C. Zilles. DOI: 10.1145/2876034.2893441

When undergraduate students are allowed to choose an exam time slot from a large number of options (e.g., 40), they exhibit strong preferences among the times. We found that students can be effectively modelled using constrained discrete choice theory to quantify these preferences from their observed behavior. The resulting models are suitable for load balancing when scheduling multiple concurrent exams and for capacity planning given a set schedule.
"Course Builder Skill Maps"
B. Roussev, P. Simakov, J. Orr, Amit Deutsch, John Cox, Michael Lenaghan, Mike Gainer. DOI: 10.1145/2876034.2893374

In this paper, we present a new set of features introduced in Course Builder that allow instructors to add skill maps to their courses. We show how skill maps can be used to provide up-to-date and actionable information on students' learning behavior and performance.
"Profiling MOOC Course Returners: How Does Student Behavior Change Between Two Course Enrollments?"
Vitomir Kovanović, Srećko Joksimović, D. Gašević, James Owers, Anne-Marie Scott, A. Woodgate. DOI: 10.1145/2876034.2893431

Massive Open Online Courses (MOOCs) represent a fertile ground for examining student behavior. Because of their openness, however, MOOCs attract a diverse body of students who are, for the most part, unknown to the course instructors. A certain number of students enroll in the same course multiple times, and the records of their previous learning activities might provide useful information to course organizers before the start of the course. In this study, we examined how student behavior changes between subsequent course offerings. We identified profiles of returning students, as well as interesting changes in their behavior between two enrollments in the same course. Results and their implications are discussed.
"AXIS"
J. Williams, Juho Kim, Anna N. Rafferty, Samuel G. Maldonado, Krzysztof Z. Gajos, Walter S. Lasecki, Neil Heffernan. DOI: 10.1145/2876034.2876042

While explanations may help people learn by providing information about why an answer is correct, many problems on online platforms lack high-quality explanations. This paper presents AXIS (Adaptive eXplanation Improvement System), a system for obtaining explanations. AXIS asks learners to generate, revise, and evaluate explanations as they solve a problem, and then uses machine learning to dynamically determine which explanation to present to a future learner, based on previous learners' collective input. Results from a case study deployment and a randomized experiment demonstrate that AXIS elicits and identifies explanations that learners find helpful. Providing explanations from AXIS also objectively enhanced learning compared to the default practice in which learners solved problems and received answers without explanations. The rated quality and learning benefit of AXIS explanations did not differ from explanations generated by an experienced instructor.
"Effective Pedagogy at Scale: Social Learning and Citizen Inquiry"
M. Sharples. DOI: 10.1145/2876034.2896321

For the past four years The Open University has published annual Innovating Pedagogy reports. Our aim has been to shift the focus of horizon scanning for education away from novel technologies and towards new forms of teaching, learning, and assessment for an interactive world, to guide teachers and policy makers in productive innovation. In the most recent report, from over thirty pedagogies ranging from bricolage to stealth assessment, we identified six overarching themes: scale, connectivity, reflection, extension, embodiment, and personalisation [8]. Delivering education at massive scale has been the headline innovation of the past four years. This success raises the question: "Which pedagogies can work successfully at scale?" Sports coaching is an example of teaching that does not scale. It involves monitoring and diagnosis of an individual's performance, based on holistic observation of body movements, followed by personal tutoring and posture adjustments. Any of these elements might be deployed at scale (for example, diagnostic learning analytics [10] or AI-based personal tutoring [4]), but in combination they require the physical presence of a human coach. The major xMOOC platforms were initially based on an instructivist pedagogy of a repeated cycle of inform and test. This has the benefit of being relatively impervious to scale: a lecture can be presented to 200 students in a theatre or to 20,000 viewers online with similar impact. Delivered on personal computers, instructivist pedagogy offers elements of personalisation by providing adaptive feedback on quiz answers and alternative routes through the content.
"A Framework for Topic Generation and Labeling from MOOC Discussions"
Thushari Atapattu, K. Falkner. DOI: 10.1145/2876034.2893414

This study proposes a standardised open framework to automatically generate and label discussion topics from Massive Open Online Courses (MOOCs). The proposed framework aims to overcome the issues experienced by MOOC participants and teaching staff in locating and navigating their information needs effectively. We analysed two MOOCs -- Machine Learning, and Statistics: Making Sense of Data, both offered during 2013 -- and obtained statistically significant results for automated topic labeling. However, more experiments with additional MOOCs from different MOOC platforms are necessary to generalise our findings.
"A Scalable Learning Analytics Platform for Automated Writing Feedback"
Jacqueline L. Feild, N. Lewkow, N. Zimmerman, M. Riedesel, Alfred Essa. DOI: 10.1145/2876034.2893380

In this paper, we describe a scalable learning analytics platform that runs generalized analytics models on educational data in parallel. As a proof of concept, we use this platform as the base for an end-to-end automated writing feedback system. The system allows students to view feedback on their writing in near real-time, edit their writing based on the feedback provided, and observe the progression of their performance over time. Providing students with detailed feedback is an important part of improving writing skills and an essential component of addressing Bloom's "two sigma" problem in education. We evaluate the effectiveness of the feedback in an ongoing pilot study with 800 students who are using the platform in a college English course.
"Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms"
David A. Joyner, W. Ashby, Liam Irish, Yeeling Lam, Jacob Langson, Isabel Lupiani, Mike Lustig, Paige Pettoruto, Dana Sheahen, Angela Smiley, A. Bruckman, Ashok K. Goel. DOI: 10.1145/2876034.2876044

Large classes, both online and residential, typically demand many graders to evaluate students' written work. Some classes attempt to use autograding or peer grading, but both present challenges for assigning grades at for-credit institutions, such as the difficulty of autograding free-response answers and the lack of expert oversight in peer grading. In a large online class at Georgia Tech in Summer 2015, we experimented with a new approach to grading: framing graders as meta-reviewers, charged with evaluating the original work in the context of peer reviews. To evaluate this approach, we conducted a pair of controlled experiments and several qualitative analyses. We found that access to peer reviews improves the perceived quality of the feedback graders provide, without decreasing the graders' efficiency and with only a small influence on the grades assigned.
"Studying Learning at Scale with the ASSISTments TestBed"
Korinn S. Ostrow, N. Heffernan. DOI: 10.1145/2876034.2893404

An interactive demonstration of how to design and implement randomized controlled experiments at scale within the ASSISTments TestBed, a new collaborative for educational research funded by the National Science Foundation (NSF). The Assessment of Learning Infrastructure (ALI), a unique data retrieval and analysis tool, is also demonstrated.