Title: Design Perspectives of Learning at Scale: Scaling Efficiency and Empowerment
Authors: Chinmay Kulkarni
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333620
Abstract: How do we design technology for learning at scale? Based on an examination of a large number of influential systems for learning at scale, I argue that designing for scale is not an amorphous design undertaking. Instead, it builds on two distinct perspectives on scale. Systems built with a scaling-through-efficiency perspective make learning more efficient and allow the same number of instructors to help a much larger set of learners. Systems built with a scaling-through-empowerment perspective empower a larger number of people to assist learners effectively. I outline how these simple differences in design perspective lead to large differences in design concerns, techniques, and evaluation criteria. Articulating prevalent design perspectives should make overlooked design opportunities more salient, help systems designers design for scale more deliberately and understand their tradeoffs, and open up new opportunities to designers who shift their perspectives.
Title: Supporting Instruction of Formulaic Sequences Using Videos at Scale
Authors: K. Jo, Hyeonggeun Yun, Juho Kim
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333671
Abstract: To help language learners achieve fluency, instructors often focus on teaching formulaic sequences (FSs): phrases such as idioms or phrasal verbs that are processed, stored, and retrieved holistically. Teaching FSs effectively is challenging, as it relies heavily on instructors' intuition, prior knowledge, and manual effort to identify a set of FSs with high utility. In this paper, we present FSIST, a tool that supports instructors in video-based instruction of FSs. The core idea of FSIST is to utilize videos at scale to build a list of FSs along with videos that include example usages. To evaluate how effectively FSIST supports instructors, we conducted a user study with three English instructors. Results show that the browsing interactions provided in FSIST help instructors efficiently find the parts of videos that show example usages of FSs.
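The paper does not publish FSIST's implementation, but its core idea, mapping each formulaic sequence to timestamped video segments that contain an example usage, can be sketched as a simple index over transcript segments. All names and the data layout below are illustrative assumptions, not from the paper:

```python
def build_fs_index(transcripts, fs_list):
    """Map each formulaic sequence (FS) to the (video_id, start_time)
    segments whose transcript text contains it.

    transcripts: {video_id: [(start_time_seconds, segment_text), ...]}
    fs_list: list of lowercase FS strings, e.g. "by and large"
    """
    index = {fs: [] for fs in fs_list}
    for video_id, segments in transcripts.items():
        for start, text in segments:
            lowered = text.lower()
            for fs in fs_list:
                if fs in lowered:
                    index[fs].append((video_id, start))
    return index


# Hypothetical transcript data for two segments of one video.
transcripts = {
    "v1": [(0.0, "Let's get down to business"),
           (5.0, "By and large it works")],
}
index = build_fs_index(transcripts, ["by and large", "in a nutshell"])
```

An instructor-facing browsing interface like FSIST's could then jump straight to `index["by and large"]` segments instead of scrubbing through full videos.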
Title: BookBuddy
Authors: S. Ruan, A. Willis, Qianyao Xu, Glenn M. Davis, Liwei Jiang, E. Brunskill, J. Landay
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333643
Abstract: Digitization of education has brought a tremendous amount of online material that is potentially useful for language learners practicing their reading skills. However, these digital materials rarely help with conversational practice, a key component of foreign language learning. Leveraging recent advances in chatbot technologies, we developed BookBuddy, a scalable virtual reading companion that can turn any reading material into an interactive conversation-based English lesson. We piloted our virtual tutor with five 6-year-old native Chinese-speaking children currently learning English. Preliminary results suggest that the children enjoyed speaking English with our virtual tutoring chatbot and were highly engaged during the interaction.
Title: Key Phrase Extraction for Generating Educational Question-Answer Pairs
Authors: A. Willis, G. M. Davis, S. Ruan, L. Manoharan, J. Landay, E. Brunskill
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333636
Abstract: Automatic question generation is a promising tool for developing the learning systems of the future. Research in this area has mostly relied on having answers (key phrases) identified beforehand and given as a feature, which is not practical for real-world, scalable applications of question generation. We describe and implement an end-to-end neural question generation system that generates question-and-answer pairs given only a context paragraph. We accomplish this by first generating answer candidates (key phrases) from the paragraph context, and then generating questions using the key phrases. We evaluate our method of key phrase extraction by comparing our output over the same paragraphs with question-answer pairs generated by crowdworkers and by educational experts. Results demonstrate that our system is able to generate educationally meaningful question-and-answer pairs with only context paragraphs as input, significantly increasing the potential scalability of automatic question generation.
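The authors' extractor is neural, but the pipeline's first stage, proposing answer candidates (key phrases) from a paragraph before any question is generated, can be illustrated with a naive frequency-based stand-in. This sketch is not the paper's method; every name and heuristic in it is an assumption for illustration:

```python
import re
from collections import Counter

# Minimal stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is",
             "are", "for", "that", "with", "as", "by", "was", "were", "it"}


def candidate_key_phrases(paragraph, max_len=3, top_k=5):
    """Return frequent stopword-free n-grams as answer candidates."""
    words = re.findall(r"[a-z]+", paragraph.lower())
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            # Skip n-grams that begin or end with a stopword.
            if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                continue
            counts[" ".join(gram)] += 1
    # Rank by frequency, breaking ties in favor of longer phrases.
    ranked = sorted(counts, key=lambda p: (counts[p], len(p.split())),
                    reverse=True)
    return ranked[:top_k]


candidates = candidate_key_phrases(
    "Photosynthesis converts light energy. "
    "Photosynthesis occurs in chloroplasts.")
```

In the paper's design, each surviving candidate would then be fed to a question-generation model as the target answer.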
Title: Guided-KNOWLA: The Use of Guided Unscrambling to Enhance Active Online Learning
Authors: Eric J. Braude, Ye Liu
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333655
Abstract: Test-yourself questions are effective examples of formative assessment and have been shown to promote learners' active interaction with materials and knowledge mastery through frequent practice. However, the cost of developing and implementing engaging test-yourself activities can be problematic in large-scale web-based learning environments; a lack of built-in scaffolding to guide learners is also a challenge. We introduce Guided-KNOWLA, an improvement of KNOWLA, a learning tool that has learners assemble a given set of mixed-size scrambled fragments into a logical order using a web-based interface, enhanced with motivational step-by-step hints and guidance. We conducted an exploratory study with graduate learners to examine their attitudes toward Guided-KNOWLA activities, measured by perceived usefulness and by comparison with other formats of formative assessment. Preliminary results suggest that Guided-KNOWLA activities were useful in helping learners master online materials and were preferred over multiple-choice questions as a format of "test-yourself" practice.
Title: Peer Advising at Scale: Content and Context of a Learner-Owned Course Evaluation System
Authors: Alex Duncan, David A. Joyner
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333660
Abstract: Peer advising in education, in which students provide fellow students with course advice, can be important in online student communities and can provide insights into potential course improvements. We examine reviews from a course review website for online graduate programs. We develop a coding scheme to analyze the free-text portion of the reviews and integrate those findings with students' quantitative ratings of each course's overall score, difficulty, and workload. While reviews focus on subjective evaluation of courses, students also provide feedback for instructors, personal context, advice for other students, and objective course descriptions. Additionally, the average review varies by course overall score, difficulty, and workload. Our research examines the importance of student communities in online education and peer advising at scale.
Title: UpGrade: Sourcing Student Open-Ended Solutions to Create Scalable Learning Opportunities
Authors: Xu Wang, Srinivasa Teja Talluri, C. Rosé, K. Koedinger
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333614
Abstract: In schools and colleges around the world, open-ended homework assignments are commonly used. However, such assignments require substantial instructor effort to grade, and tend not to support opportunities for repeated practice. We propose UpGrade, a novel learnersourcing approach that generates scalable learning opportunities from prior student solutions to open-ended problems. UpGrade creates interactive questions that offer automated, real-time feedback while enabling repeated practice. In a two-week experiment in a college-level HCI course, students answering UpGrade-created questions instead of traditional open-ended assignments achieved indistinguishable learning outcomes in roughly 30% less time, with no manual grading effort required. To enhance quality control, UpGrade incorporates a psychometric approach that uses crowd workers' answers to automatically prune low-quality questions, resulting in a question bank that exceeds reliability standards for classroom use.
Title: Impact of Free-Certificate Coupons on Learner Behavior in Online Courses: Results from Two Case Studies
Authors: Joshua Littenberg-Tobias, José A. Ruipérez Valiente, J. Reich
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333654
Abstract: The relationship between pricing and learning behavior is an increasingly important topic in MOOC (massive open online course) research. We report on two case studies in which cohorts of learners were offered coupons for free certificates, to explore how price reductions might influence user behavior in MOOC-based online learning settings. In Case Study #1, we compare participation and certification rates between courses with and without free-certificate coupons. In the courses with a free-certificate track, participants signed up for the verified-certificate track at higher rates, and completion rates among verified students were higher than in the paid-certificate courses. In Case Study #2, we compare the behaviors of learners within the same courses based on whether they received access to a free-certificate track. Access to free certificates was associated with somewhat lower certification rates, but overall certification rates remained high, particularly among those who viewed the courses. These findings suggest that incentives other than the sunk cost of paying for a verified-certificate track may motivate learners to complete MOOCs.
Title: Predict and Intervene: Addressing the Dropout Problem in a MOOC-based Program
Authors: Inma Borrella, Sergio Caballero-Caballero, Eva Ponce-Cueto
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333634
Abstract: Massive Open Online Courses (MOOCs) are an efficient way of delivering knowledge to thousands of learners. However, even among learners who show a clear intention to complete a MOOC, the dropout rate is substantial. This is particularly relevant in the context of MOOC-based educational programs, where a funnel of participation can be observed and high dropout rates at early stages of the program significantly reduce the number of learners who successfully complete it. In this paper, we propose an approach to identify learners at risk of dropping out of a course, and we design and test an intervention intended to mitigate that risk. We collect course clickstream data from MOOCs of the MITx MicroMasters® in Supply Chain Management program and apply machine learning algorithms to predict potential dropouts. Our final model is able to predict 80% of actual dropouts. Based on these results, we design an intervention aimed at increasing learners' motivation and engagement with a MOOC. The intervention consists of sending tailored encouragement emails to at-risk learners; despite a high email-opening rate, it showed no effect on dropout reduction.
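The abstract names only "machine learning algorithms" on clickstream features, so any concrete model is a guess. As one hedged illustration of the predict-then-intervene idea, the sketch below trains a tiny logistic-regression dropout classifier with plain gradient descent; the two features and all data are hypothetical:

```python
import math


def train_logistic(features, labels, lr=0.1, epochs=500):
    """Tiny logistic-regression trainer (per-sample gradient descent)."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))          # predicted dropout probability
            err = p - y                          # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b


def dropout_risk(w, b, x):
    """Probability that a learner with features x drops out."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))


# Hypothetical per-learner features:
# (fraction of videos watched, fraction of problems attempted)
X = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.2)]
y = [0, 0, 1, 1]  # 1 = dropped out
w, b = train_logistic(X, y)
```

Learners whose `dropout_risk` exceeds a chosen threshold would then be flagged for an intervention such as the paper's encouragement emails.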
Title: On the Influence of Grades on Learning Behavior of Students in MOOCs
Authors: Li Wang, Erik Hemberg, Una-May O'Reilly
Venue: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, June 24, 2019
DOI: https://doi.org/10.1145/3330430.3333652
Abstract: MOOCs (Massive Open Online Courses) frequently use grades to determine whether a student passes the course. To better understand how student behavior is influenced by grade feedback, we conduct a study of the changes in certified students' behavior before and after they receive their grade. We use observational student data from two MITx MOOCs to examine student behavior before and after a grade is released and calculate the difference (the delta-activity). We then analyze the changes in the delta-activity distributions across all graded assignments, and we observe that the variation in delta-activity decreases as grade decreases, with students who have the lowest grade exhibiting little or no change in weekly activity. This trend persists throughout each course, in all course offerings, suggesting that a change in grade does not correlate with a change in the behavior of certified MOOC students.
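The abstract defines delta-activity only informally, as the difference in a student's activity before versus after a grade release. A minimal sketch of that computation, assuming activity is a list of timestamped clickstream events and a symmetric comparison window (the window length and data layout are assumptions, not from the paper):

```python
def delta_activity(event_times, grade_release_time, window=7 * 24 * 3600):
    """Event count in the window after grade release minus the count
    in the equal-length window before it (times in seconds)."""
    before = sum(1 for t in event_times
                 if grade_release_time - window <= t < grade_release_time)
    after = sum(1 for t in event_times
                if grade_release_time <= t < grade_release_time + window)
    return after - before


# Hypothetical events: two before release at t=1000, one after.
delta = delta_activity([900, 950, 1100], grade_release_time=1000, window=200)
```

A negative delta, as here, marks a student whose activity dropped after seeing the grade; the paper compares the distribution of such deltas across grade bands.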