Towards making block-based programming activities adaptive
Tomáš Effenberger, Radek Pelánek
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231670

Abstract: Block-based environments are now commonly used for introductory programming activities, such as those in the Hour of Code campaign, which reaches millions of students. These activities typically consist of a static series of problems. Our aim is to make this type of activity more efficient by incorporating adaptive behavior. In this work, we discuss steps toward this goal: specifically, a proposal and implementation of a programming game that supports both elementary problems and interesting programming challenges, and thus provides an environment for meaningful adaptation. We also discuss methods of adaptivity and the issue of evaluating student performance while solving a problem.
Toward a large-scale open learning system for data management
S. Murthy, Andrew Figueroa, Steven Rollo
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231673

Abstract: This paper describes ClassDB, a free and open-source system that enables large-scale learning of data management. ClassDB differs from existing solutions in that the same system supports a wide range of data-management topics, from introductory SQL to advanced "native analytics", where code in SQL and non-SQL languages (Python and R) runs inside a database management system. Each student or team maintains their own sandbox, which instructors can read and provide feedback on. Both students and instructors can review activity logs to analyze progress and determine a future course of action. ClassDB is currently in its second pilot and is scheduled for a larger trial later this year. After the trials, ClassDB will be made available to about 4,000 students in the university system, which comprises four universities and 12 community colleges. ClassDB is built in collaboration with students using modern DevOps processes. Its source code and documentation are available in a public GitHub repository. ClassDB is a work in progress.
Exploring the utility of response times and wrong answers for adaptive learning
Radek Pelánek
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231675

Abstract: Personalized educational systems adapt their behavior based on student performance. Most student modeling techniques, which are used to guide the adaptation, utilize only the correctness of students' answers. However, other data about performance are typically available. In this work, we focus on response times and wrong answers, as these aspects of performance are available in most systems. We analyze data from several types of exercises and domains (mathematics, spelling, grammar). The results suggest that wrong answers are more informative than response times. Based on our results, we propose a classification of student performance into several categories.
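The idea of classifying performance from correctness and response time can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual classification: the category names, the median-based thresholds, and the `classify_performance` function are all assumptions introduced here.

```python
# Illustrative sketch (not the paper's model): map one student answer to a
# coarse performance category using correctness and response time relative
# to the median solving time for the item. Thresholds are assumptions.

def classify_performance(correct: bool, response_time: float,
                         median_time: float) -> str:
    """Classify a single answer into a coarse performance category."""
    fast = response_time < 0.5 * median_time
    slow = response_time > 2.0 * median_time
    if correct:
        if fast:
            return "fluent"            # quick and correct
        if slow:
            return "effortful-correct" # correct, but after a long struggle
        return "correct"
    if fast:
        return "careless-or-guessing"  # quick wrong answer
    return "incorrect"
```

A real system would likely also inspect *which* wrong answer was given (the paper's finding is that wrong answers carry more information than times), mapping common wrong answers to known misconceptions.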
Virtualizing face-2-face trainings for training senior professionals: a comparative case study on financial auditors
Viktoria Pammer-Schindler, S. Thalmann, Angela Fessl, Julia Füssel
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231695

Abstract: Traditionally, professional learning for senior professionals is organized around face-to-face trainings. Virtual trainings seem to offer an opportunity to reduce costs related to travel and travel time. In this paper, we present a comparative case study that investigates the differences between traditional face-to-face trainings in physical reality and virtual trainings via WebEx. Our goal is to identify how the mode of communication impacts interaction between trainees and between trainees and trainers, and how it impacts interruptions. We present qualitative results from observations and interviews across three cases with different setups (traditional classroom, web-based with all participants co-located, web-based with all participants at different locations), with 25 training participants and three trainers overall. The study is set within one of the Big Four global auditing companies, with advanced senior auditors as the learning cohort.
Grading at scale in EarSketch
Avneesh Sarwate, Creston Brunch, Jason Freeman, S. Siva
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231708

Abstract: This paper explores some of the challenges posed by automated grading of programming assignments in a STEAM (Science, Technology, Engineering, Art, and Math) based curriculum, as well as how these challenges are addressed in the automatic grading processes used in EarSketch, a music-based educational programming environment developed at Georgia Tech. This work-in-progress paper reviews common strategies for grading programming assignments at scale and discusses how they are combined in EarSketch to evaluate open-ended STEAM-focused assignments.
Towards domain general detection of transactive knowledge building behavior
James Fiacco, C. Rosé
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231655

Abstract: Support of discussion-based learning at scale benefits from automated analysis of discussion: for enabling effective assignment of students to project teams, for triggering dynamic support of group learning processes, and for assessment of those learning processes. A major limitation of much past work applying machine learning to automated analysis of discussion is the failure of the models to generalize to data outside the parameters of the context in which the training data was collected. This limitation means that a separate training effort must be undertaken for each domain in which the models will be used. This paper focuses on a specific construct of discussion-based learning, referred to as Transactivity, and provides a novel machine learning approach whose performance exceeds the state of the art both within the domain in which it was trained and in a new domain, and suffers no reduction in performance when transferring to the new domain. These results stand as an advance over past work on automated detection of Transactivity and increase the value of trained models for supporting group learning at scale. Implications for practice in at-scale learning environments are discussed.
Gamifying higher education: enhancing learning with mobile game app
Farshida Zafar, Jacqueline Wong, Mohammad Khalil
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231686

Abstract: We present a mobile game app (EUR Game) designed to complement teaching and learning in higher education. The app can be used by teachers to gauge how well students are meeting the learning objectives; teachers can use this information to provide 'just-in-time' support and adapt their lessons accordingly. For students, the app is a study tool for testing their own understanding and monitoring their study progress, which in turn supports self-regulated learning. Gamification elements are also included to enhance the learning experience. During the demonstration, participants will experience the features of the app and engage in an interactive session exploring the possible ways to use it to support teaching and learning.
Squeezing the limeade: policies and workflows for scalable online degrees
David A. Joyner
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231649

Abstract: In recent years, non-credit options for learning at scale have outpaced for-credit options. To scale for-credit options, workflows and policies must be devised to preserve the characteristics of accredited higher education, such as the presumption of human evaluation and an assertion of academic integrity, despite increased scale. These efforts must also accompany the shift from offering isolated courses (or informal collections thereof) to offering full degree programs with additional administrative elements. We see this shift as one from Massive Open Online Courses (MOOCs) to Large, Internet-Mediated Asynchronous Degrees (Limeades). In this work, we perform a qualitative research study on one such program that has scaled to 6,500 students while retaining full accreditation. We report a typology of policies and workflows employed by the individual classes to deliver this experience.
How much randomization is needed to deter collaborative cheating on asynchronous exams?
Binglin Chen, Matthew West, C. Zilles
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231664

Abstract: This paper investigates randomization on asynchronous exams as a defense against collaborative cheating. Asynchronous exams are those for which students take the exam at different times, potentially across a multi-day exam period. Collaborative cheating occurs when one student (the information producer) takes the exam early and passes information about the exam to other students (the information consumers) who take the exam later. Using a dataset of computerized exam and homework problems in a single course with 425 students, we identified 5.5% of students (on average) as information consumers by their disproportionate studying of problems that were on the exam. These information consumers ("cheaters") had a significant advantage (13 percentage points on average) when every student was given the same exam problem (even when the parameters were randomized for each student), but that advantage dropped to almost negligible levels (2--3 percentage points) when students were given a random problem from a pool of two or four problems. We conclude that randomization with pools of four (or even three) problems, which also contain randomized parameters, is an effective mitigation for collaborative cheating. Our analysis suggests that this mitigation is in part explained by cheating students having less complete information about larger pools.
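The pooling mechanism described above can be sketched with a toy model. This is an illustrative simplification introduced here, not the paper's analysis: it assumes a cheating student has fully studied `studied` leaked problems out of a pool of `pool_size`, one of which is served uniformly at random, and that the 13-percentage-point advantage applies only when a studied problem is served.

```python
import random

# Toy model (an assumption, not the paper's method): the expected advantage
# of a cheater scales with the chance that the randomly served problem is
# one they have studied.

def expected_advantage(pool_size: int, studied: int,
                       full_advantage: float = 0.13) -> float:
    """Expected score advantage when one of pool_size problems is served."""
    studied = min(studied, pool_size)
    return full_advantage * studied / pool_size

def serve_problem(pool: list, rng: random.Random) -> object:
    """Serve each student one problem drawn uniformly from the pool."""
    return rng.choice(pool)
```

Under this model a pool of four with one studied problem cuts the 13-point advantage to about 3 points, in the same ballpark as the paper's observed 2--3 points, though the paper's mechanism (incomplete information about larger pools) is richer than this sketch.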
Effects of automated interventions in programming assignments: evidence from a field experiment
Ralf Teusner, Thomas Hille, T. Staubitz
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26. DOI: https://doi.org/10.1145/3231644.3231650

Abstract: A typical problem in MOOCs is that instructors have no opportunity to individually support students in overcoming their problems and misconceptions. This paper presents the results of automatically intervening when students struggle during programming exercises and offering peer feedback and tailored bonus exercises. To improve learning success, we do not want to abolish instructionally desired trial and error, but rather reduce extensive struggle and demotivation. Therefore, we developed adaptive, automatic, just-in-time interventions that encourage students to ask for help if they require considerably more than the average working time to solve an exercise. Additionally, we offered students bonus exercises tailored to their individual weaknesses. The approach was evaluated within a live course with over 5,000 active students, via a survey and metrics gathered alongside it. Results show that we can increase calls for help by up to 66% and shorten the dwelling time before students take action. Learnings from the experiments can further be used to pinpoint course material to be improved and to tailor content to a specific audience.
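The two mechanisms described above, a working-time trigger and weakness-tailored bonus exercises, can be sketched as follows. This is a minimal sketch under stated assumptions: the 1.5x threshold multiplier, the function names, and the error-count-based notion of "weakness" are all illustrative choices introduced here, not details from the paper.

```python
# Sketch of a just-in-time intervention trigger and bonus-exercise picker
# in the spirit of the approach above. Threshold and data shapes are
# assumptions for illustration.

def should_intervene(working_time: float, avg_time: float,
                     factor: float = 1.5) -> bool:
    """Offer help once working time considerably exceeds the average
    working time for this exercise (here: by an assumed factor of 1.5)."""
    return working_time > factor * avg_time

def pick_bonus_exercises(error_counts: dict, exercises_by_topic: dict,
                         n: int = 2) -> list:
    """Pick bonus exercises covering the student's n weakest topics,
    where weakness is approximated by per-topic error counts."""
    weakest = sorted(error_counts, key=error_counts.get, reverse=True)[:n]
    return [ex for topic in weakest
            for ex in exercises_by_topic.get(topic, [])]
```

For example, a student who has spent 10 minutes on an exercise whose average solving time is 5 minutes would be offered help, and a student with many loop-related errors would be served loop-focused bonus exercises.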