The Role of Self-Regulated Learning in the Design, Implementation, and Evaluation of Learning Analytics Dashboards
Carl C. Haynes
DOI: 10.1145/3386527.3406732

Learning technologies generate a vast quantity of data every day. This data is often presented to students through learning analytics dashboards (LADs) with the goal of improving learners' self-regulated learning. However, are students actually using these dashboards, and do they perceive that using them leads to any changes in their behavior? In this paper we report on the development and implementation of several dashboard views, which we call My Learning Analytics (MyLA). This study found that students thought using the dashboard would have more of an effect on the way they planned their course activity at pre-use (after a demo) than at post-use. Low self-regulated learners believed this significantly less at post-use and used the grade distribution view the least. Students made several suggestions for improving the grade distribution view and rated MyLA's usability more positively at pre-use than at post-use. Given the low use and low perceived impact of the current dashboard, we suggest that researchers use participatory design to elicit students' needs and better incorporate student suggestions.
Two Stances, Three Genres, and Four Intractable Dilemmas for the Future of Learning at Scale
J. Reich
DOI: 10.1145/3386527.3405929

The late 2000s and 2010s saw the full arc of a dramatic hype cycle in learning at scale, where charismatic technologists made bold and ultimately unfounded predictions about how technologies would disrupt schooling systems. Looking toward the 2020s, a more productive approach to learning at scale is the tinkerer's stance, one that emphasizes incremental improvements on the long history of learning at scale. This article offers two organizational constructs for navigating and building on that history. Classifying learning-at-scale technologies into three genres (instructor-guided, algorithm-guided, and peer-guided approaches) helps identify how emerging technologies build on prior efforts and throws into relief that which is genuinely new. Four as-yet intractable dilemmas (the curse of the familiar, the edtech Matthew effect, the trap of routine assessment, and the toxic power of data and experiments) offer a set of grand challenges that learning-at-scale tinkerers will need to tackle in order to see more dramatic improvements in school systems.
Introducing Alexa for E-learning
Jinjin Zhao, Shreyansh P. Bhatt, Candace Thille, D. Zimmaro, Neelesh Gattani, Josh Walker
DOI: 10.1145/3386527.3406719

E-learning is becoming popular as it provides learners flexibility, targeted resources from across the internet, personalized guidance, and immediate feedback during learning. However, the lack of social interaction, an indispensable component in developing some skills, has been a pain point in e-learning. We propose using Alexa, a voice-controlled Intelligent Personal Assistant (IPA), in e-learning to provide in-person practice toward desired learning goals. With Alexa-enabled learning experiences, learners are able to practice with other students (one role of Alexa) or receive immediate feedback from teachers (another role of Alexa) in an e-learning environment. We propose a configuration-driven conversation engine that supports instructional designers in creating diverse in-person practice opportunities in e-learning. We demonstrate that learning designers can create an Alexa activity with a few configuration steps. We also share results on the effectiveness of an Alexa activity with formative assessment evaluation in real-world applications.
Towards Scalable Gamified Assessment in Support of Collaborative Problem-Solving Competency Development in Online and Blended Learning
Y. Rosen, Kristin Stoeffler, M. Yudelson, V. Simmering
DOI: 10.1145/3386527.3405946

Collaborative problem solving (CPS) is an important competency for life and career success. Promoting the development of CPS skills requires robust CPS assessment. This paper describes a gamified stealth CPS assessment used within a collaborative inquiry science curriculum. A pilot deployment included 196 middle school students from multiple schools in the United States. Results showed the sample was balanced in terms of measured skill performance and completion time. Future directions include the extension to teacher authoring and the deployment of this gamified assessment approach to additional contexts, such as workforce training and credentialing in large-scale online courses.
Analysis of Grading Times of Short Answer Questions
Michael Yen, Sergey Karayev, E. Wang
DOI: 10.1145/3386527.3406748

We present an analysis of factors correlated with grading speed for short-answer questions from college-level STEM courses, using a novel dataset collected by an online education company. By analyzing timestamp data, we were able to estimate how long instructors spend grading individual student responses, which we typically found to be less than 10 seconds. This dataset provides a unique opportunity to determine which steps in the grading workflow could benefit from intervention. We found that sorting responses by rubric similarity has the potential to reduce grading time by up to 50% per response. We plan to follow this work by implementing an intelligent agent that presents responses in a sorted order to minimize grading time.
The Effect of Informing Agency in Self-Directed Online Learning Environments
Benjamin Xie, Greg L. Nelson, Harshitha Akkaraju, William Kwok, Amy J. Ko
DOI: 10.1145/3386527.3405928

Choices learners make when navigating a self-directed online learning tool can impact the effectiveness of the experience. But these tools often do not afford learners the agency or the information to make decisions beneficial to their learning. We evaluated the effect of varying levels of information and agency in a self-directed environment designed to teach programming. We investigated three design alternatives: informed high-agency, informed low-agency, and less-informed high-agency. To investigate the effect of these alternatives on learning, we conducted a study with 79 novice programmers. Our results indicated that increased agency and information may have translated to more motivation, but not to improved learning. Qualitative results suggest this was due to the burden that agency and information placed on decision-making. We interpret our results to inform the design of self-directed online tools that support learner agency.
Identifying Preparatory Courses that Predict Student Success in Quantitative Subjects
G. M. Davis, Abdallah A. AbuHashem, David Lang, M. Stevens
DOI: 10.1145/3386527.3406742

College courses are often organized into hierarchical sequences, with foundational courses recommended or required as prerequisites for other offerings. While the wisdom of particular sequences is usually ascertained on the basis of faculty experience or student peer networks, machine learning techniques and ubiquitous transcript data make it possible to systematically identify the courses that best predict subsequent high achievement across entire curricula and student populations. We demonstrate the utility of this approach by analyzing five years of course sequences and earned grades for 13,218 undergraduates enrolled in courses with substantial quantitative content at a private research university. Findings indicate that prior completion of specific courses is positively associated with success in subsequent target courses, and suggest that academic planning could be enhanced through scaled observation of the revealed benefits of course sequences.
Have Your Tickets Ready! Impede Free Riding in Large Scale Team Assignments
T. Staubitz, H. Traifeh, S. Chujfi, C. Meinel
DOI: 10.1145/3386527.3406744

Teamwork and graded team assignments in MOOCs are still largely under-researched. Nevertheless, the topic is enormously important, as working and solving problems in teams is increasingly common in modern work environments. This paper discusses the reliability of a system to detect free-riders in peer-assessed team tasks.
A Novel Approach for Knowledge State Representation and Prediction
Shreyansh P. Bhatt, Jinjin Zhao, Candace Thille, D. Zimmaro, Neelesh Gattani
DOI: 10.1145/3386527.3406745

Online learning systems with open navigation allow learners to select the next learning activity in order to achieve desired mastery. To help learners make an informed choice regarding the next learning activity, we propose to represent and communicate the learner's knowledge state as the average success rate for each skill in the course, rather than as the probability of correctly answering the next question. We first show that we can accurately estimate the proposed knowledge state. We then show that the proposed attention-based model for estimating the knowledge state requires fewer parameters, provides actionable information to learners, and achieves equivalent or better accuracy compared to models based on recurrent neural networks (RNNs).
Open PISA: Dashboard for Large Educational Dataset
Avner Kantor, S. Rafaeli
DOI: 10.1145/3386527.3406721

International Large-Scale Assessments (ILSA) play a critical role in shaping education systems around the world. They influence local and national education policy and receive much attention in the media and public discourse. However, the public has limited access to the results and cannot learn from them; as a result, the media may frame the results incorrectly. Transparency of ILSA is essential to advancing public discourse, and it requires easy access to the data together with simple analysis tools. However, the complexity of ILSA makes the data hard to understand and analyze. Open PISA addresses this challenge by developing a dashboard for the Programme for International Student Assessment (PISA) that aims to guide users in analyzing the dataset. This paper describes the dashboard design and insights based on collected user responses. It hypothesizes that full transparency of the PISA dataset may not be achievable for the entire public; further research is needed to evaluate how dataset analysis affects users' knowledge and opinions.