SAGA: Curricula Optimization
A. Lefranc, David A. Joyner. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3406737
This paper presents two approaches using Simulated Annealing and a genetic algorithm to create optimal curricula. The method generates a customized course selection and schedule for individual students enrolled in a large online graduate program in computer science offered by a major public research institution in the United States.
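The abstract names simulated annealing but does not detail its formulation. As a hedged illustration of the general technique only, a minimal annealing loop for assigning courses to semesters might look like the sketch below; the course pool, cost function, and parameters are hypothetical, not taken from the SAGA paper.

```python
import math
import random

COURSES = ["ML", "AI", "HCI", "DB", "OS", "SE"]  # hypothetical course pool
SEMESTERS = 3
LOAD = 2  # target number of courses per semester

def cost(schedule):
    # Penalize uneven load across semesters -- a stand-in for real
    # constraints such as prerequisites or seat availability.
    return sum(abs(sum(1 for s in schedule.values() if s == t) - LOAD)
               for t in range(SEMESTERS))

def anneal(steps=5000, temp=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    schedule = {c: rng.randrange(SEMESTERS) for c in COURSES}
    cur = cost(schedule)
    best, best_cost = dict(schedule), cur
    for _ in range(steps):
        course = rng.choice(COURSES)
        old = schedule[course]
        schedule[course] = rng.randrange(SEMESTERS)
        new = cost(schedule)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta/temp) so the search can escape local optima.
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best_cost:
                best, best_cost = dict(schedule), cur
        else:
            schedule[course] = old  # revert the rejected move
        temp *= cooling  # geometric cooling schedule
    return best, best_cost

best, best_cost = anneal()
```

A genetic-algorithm variant would instead maintain a population of schedules and apply crossover and mutation, but the acceptance-with-cooling loop above is the core of the annealing approach.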
Towards Crowdsourcing the Identification of Knowledge Components
Steven Moore, Huy A. Nguyen, John C. Stamper. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3405940
Assigning a set of hypothesized knowledge components (KCs) to assessment items within an ed-tech system enables us to better estimate student learning. However, creating and assigning these KCs is a time-consuming process that often requires domain expertise. In this study, we present the results of crowdsourcing KCs for problems in the domains of mathematics and English writing, as a first step in leveraging the crowd to expedite this task. Crowdworkers were presented with a problem and asked to provide the underlying skills required to solve it. Additionally, we investigated the effect of priming crowdworkers with related content before having them generate these KCs. We then analyzed their contributions through qualitative coding and found that, across both the math and writing domains, roughly 33% of the crowdsourced KCs directly matched those generated by domain experts for the same problems.
An Evidence-Based Learner Model for Supporting Activities in Robotics
S. Schulz, Andreas Lingnau. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3406760
Teaching robotics is an attractive way of motivating students to learn computer science. However, it is also a challenging topic for students of all ages, and a single teacher cannot adequately support a classroom of roughly 30 students at the same time. Intelligent tutoring systems may therefore be a meaningful way to support both students and teachers. In this paper, we describe an approach to supporting computer science lessons in secondary schools using a learner model. We explain how the three phases of our learner model (data collection, profile construction, profile application) can be implemented for teaching robotics, using different types of implicit and explicit data both to give teachers feedback on students' competencies and knowledge and to support collaboration and group formation among the students. The model is derived from the literature and supported by data from different studies.
Inferring Creativity in Visual Programming Environments
Anastasia Kovalkov, A. Segal, Y. Gal. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3406725
This paper explores the use of data analytics for identifying creativity in visual programming. Visual programming environments are increasingly included in school curricula, and their potential for promoting creative thinking in students is an important factor in their adoption. However, no standard approach exists for detecting creativity in students' programming behavior, and analyzing programs manually requires human expertise and is time-consuming. This work provides a computational tool for measuring creativity in visual programming that combines theory from the literature with data mining approaches. It adapts classical dimensions of creative processes to our setting and considers new aspects such as the visual elements of visual programming projects. We apply our approach to the Scratch programming environment, measuring the creativity scores of hundreds of projects, and present a preliminary comparison between our metrics and teacher ratings.
Using Google Search Trends to Estimate Global Patterns in Learning
S. Arslan, Mo Tiwari, C. Piech. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3405913
The use of the Internet for learning provides a unique and growing opportunity to revisit the task of quantifying how much people have learned about a given subject in different regions around the world. Google alone receives over 5 billion searches a day, and its publicly available data provides insight into learning processes that are otherwise unobservable on a global scale. In this paper, we introduce the Computer Science Literacy-Proxy Index via Search (CSLI-s), a measure that uses online search data to estimate trends in computer science education. This measure uses a statistical signal processing technique to compose search volumes from a spectrum of topics into a coherent score. We intentionally explore and mitigate the biases of search data and, in the process, develop CSLI-s scores that correlate with traditional, more expensive metrics of learning. We then use search-trend data to measure patterns in subject literacy across countries and over time. To the best of our knowledge, this is the first measure of learning via Internet search trends. The Internet is becoming a standard tool for learners and, as such, we anticipate search-trend data will have growing relevance to the learning science community.
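The abstract describes composing per-topic search volumes into a single coherent score via a statistical signal processing technique. One common way to build such a composite index, shown purely as a hedged sketch with invented topic names and random data (the paper's actual pipeline may differ), is to project centered volumes onto their first principal component:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows = regions, columns = normalized search volume per CS-related topic.
# Both the topics and the data are invented for illustration.
topics = ["python", "recursion", "binary search", "big-o notation"]
volumes = rng.random((50, len(topics)))

def composite_score(X):
    Xc = X - X.mean(axis=0)  # center each topic series
    # SVD yields the principal axes of the centered data; projecting onto
    # the leading axis gives one score per region that captures the
    # largest shared variation across topics.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

scores = composite_score(volumes)
```

Because the data are centered first, the resulting scores are zero-mean by construction, which makes them comparable across regions before any further calibration against external learning metrics.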
Automatic RNN Cell Design for Knowledge Tracing using Reinforcement Learning
Xinyi Ding, Eric C. Larson. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3406729
Empirical results have shown that deep neural networks achieve superior performance in knowledge tracing. However, the design of recurrent cells such as long short-term memory (LSTM) cells and gated recurrent units (GRUs) has been influenced largely by applications in natural language processing: they were proposed and evaluated in the context of sequence-to-sequence modeling, such as machine translation. Even though the LSTM cell works well for knowledge tracing, it is unknown whether its architecture is ideally suited to the task. Although several recurrent neural network architectures have been proposed for knowledge tracing, the methodologies rely on empirical observation and trial and error, which may not be efficient or scalable. In this study, we investigate using reinforcement learning to automatically design recurrent neural network cells for knowledge tracing, showing improved performance compared to the LSTM cell. We also discuss a potential method for model regularization using neural architecture search.
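For readers unfamiliar with the knowledge-tracing setup the paper builds on, here is a minimal, untrained sketch: a recurrent network consumes one-hot (skill, correctness) pairs and emits per-skill probabilities of a correct answer at the next step. All sizes and weights are invented and random; the paper's contribution is searching over the recurrent cell architecture itself, which this vanilla-RNN sketch does not attempt.

```python
import numpy as np

N_SKILLS, HIDDEN = 5, 16
rng = np.random.default_rng(0)
# Random, untrained weights -- a real system would learn these.
Wx = rng.normal(0, 0.1, (HIDDEN, 2 * N_SKILLS))  # input-to-hidden
Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))        # hidden-to-hidden
Wo = rng.normal(0, 0.1, (N_SKILLS, HIDDEN))      # hidden-to-output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def trace(interactions):
    """interactions: list of (skill_id, correct) pairs in time order."""
    h = np.zeros(HIDDEN)
    preds = []
    for skill, correct in interactions:
        # One-hot encode the (skill, correctness) pair: first N_SKILLS
        # slots mean "answered incorrectly", the rest "answered correctly".
        x = np.zeros(2 * N_SKILLS)
        x[skill + (N_SKILLS if correct else 0)] = 1.0
        h = np.tanh(Wx @ x + Wh @ h)      # vanilla RNN cell update
        preds.append(sigmoid(Wo @ h))     # next-step P(correct) per skill
    return np.array(preds)

p = trace([(0, 1), (0, 1), (1, 0)])
```

The reinforcement-learning search in the paper would, in effect, replace the fixed `tanh` update above with a learned composition of operations evaluated by validation performance.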
Workshop Proposal: Educational A/B Testing at Scale
Steven Ritter, N. Heffernan, J. Williams, Burr Settles, Phillip J. Grimaldi, Derek J. Lomas. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3405933
The emerging discipline of Learning Engineering is focused on putting into place tools and processes that use the science of learning as a basis for improving educational outcomes [3]. An important part of Learning Engineering focuses on improving the effectiveness of educational software. In many software domains, A/B testing has become a prominent technique for achieving the software's goals [1]. Many large companies (Amazon, Google, Facebook, etc.) run thousands of A/B tests and present at the Annual Conference on Digital Experimentation (CODE), but that venue is too broad to address A/B testing issues specific to EdTech platforms. We see a need to address issues with running large-scale A/B tests within the educational context, where the use of A/B testing lags other industries. This workshop will explore ways in which A/B testing in educational contexts differs from other domains and proposals to overcome current challenges so that this approach can become a more useful tool in the learning engineer's toolbox. Issues to be addressed are expected to include:
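As context for the workshop topic, the statistic behind the simplest educational A/B comparison, such as lesson-completion rates under two variants, is a two-proportion z-test. The counts below are invented for illustration and are not from any study in the proposal:

```python
from math import erf, sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """z statistic and two-sided p-value for comparing two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: variant A completed by 480/1000 students,
# variant B by 430/1000.
z, p = two_proportion_ztest(480, 1000, 430, 1000)
```

Much of what the workshop discusses lies beyond this basic machinery: educational outcomes are often delayed, clustered by classroom, and confounded by learning effects, all of which complicate the independence assumptions this simple test makes.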
Student Engagement in Mobile Learning via Text Message
René F. Kizilcec, Maximillian Chen. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3405921
Mobile learning is expanding rapidly due to its accessibility and affordability, especially in resource-poor parts of the world. Yet how students engage and learn with mobile learning has not been systematically analyzed at scale. This study examines how 93,819 Kenyan students in grades 6, 9, and 12 use a text message-based mobile learning platform that has millions of users across Sub-Saharan Africa. We investigate longitudinal variation in engagement over a one-year period for students in different age groups and check for evidence of learning gains using learning curve analysis. Student engagement is highest during school holidays and leading up to standardized exams, but persistence over time is low: under 25% of students return to the platform after joining. Clustering students into three groups based on their level of activity, we examine variation in their learning behaviors and quiz performance over their first ten days. Highly active students exhibit promising trends in terms of quiz completion, reattempts, and accuracy, but we do not see evidence of learning gains in this study. The findings suggest that students in Kenya use mobile learning either as an ad-hoc resource or a low-cost tutor to complement formal schooling and bridge gaps in instruction.
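The abstract checks for learning gains using learning curve analysis. One standard formulation (not necessarily this paper's exact method) fits a power law, error_rate = a * opportunity^(-b), by linear regression in log-log space; b > 0 indicates improvement with practice, while a flat fit matches the "no evidence of learning gains" finding. The data below are hypothetical:

```python
import numpy as np

# Hypothetical aggregate data: error rate on a skill at the 1st..10th
# practice opportunity, showing the declining shape a learning curve
# analysis looks for.
opportunities = np.arange(1, 11)
error_rate = np.array([0.60, 0.48, 0.41, 0.37, 0.33,
                       0.31, 0.29, 0.27, 0.26, 0.25])

def fit_power_law(x, y):
    # Fit log(y) = log(a) - b*log(x); polyfit returns [slope, intercept].
    slope, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), -slope  # (initial error a, learning rate b)

a, b = fit_power_law(opportunities, error_rate)
```

In practice, a learning curve analysis would fit curves like this per knowledge component and inspect whether the estimated learning rate b is reliably positive across skills.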
Differential Assessment, Differential Benefit: Four-year Problem Roulette Analysis of STEM Practice Study
N. Weaverdyck, D. Anbajagane, A. Evrard. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3406731
Using five million responses to thousands of practice examination questions on an optional study service known as Problem Roulette, we explore subject-specific differences in assessment style, grade benefit from usage of the service, and differential features in study behavior and grade outcome by birth sex. Our study includes more than 20,000 students enrolled in eight terms of introductory courses in general chemistry, physics, and statistics. Student responses in the space of accuracy and response time reveal domain differences; by these measures, physics problems are typically both more difficult and more complex. Grouping students by term-length practice volume, we find significant positive grade benefits to higher volumes of study in chemistry and statistics. Across all subjects, we find that females gain more grade benefit from high study volume than males. Female students also outwork males during prime study hours yet, on average, earn 0.13 ± 0.03 lower grade points in chemistry than males with the same response accuracy in practice, with null results in statistics and physics.
Developing Digital Clinical Simulations for Large-Scale Settings on Diversity, Equity, and Inclusion: Design Considerations for Effective Implementation at Scale
Elizabeth Borneman, Joshua Littenberg-Tobias, J. Reich. Proceedings of the Seventh ACM Conference on Learning @ Scale, 2020. https://doi.org/10.1145/3386527.3405947
Digital clinical simulations (DCSs) are a promising tool for professional learning on diversity, equity, and inclusion (DEI) issues across a variety of fields. Although DCSs can be integrated into large-scale learning environments, less is known about how to design these simulations so that they scale effectively. We describe the results of two studies of a digital clinical simulation tool called Jeremy's Journal. In Study 1, we implemented the simulation in an in-person workshop with a human facilitator; participants described their learning experiences positively and reported changes in attitudes. In Study 2, we used the simulation within an online course but replaced the human facilitator with an asynchronous, text-based adaptation of the facilitation script. Although learners in Study 2 described the simulation experience positively, we did not observe changes in attitudes. We discuss the implications of these findings for the design of DCSs at scale.