Code Hunt (http://www.codehunt.com/) is a browser-based educational coding game for teaching and learning computer science at scale. The game consists of a series of worlds and levels, which get increasingly challenging. In each level, the player has to discover a secret code fragment and write code that matches its behavior. The game has sounds and a leaderboard to keep the player engaged. Code Hunt targets teachers and students from introductory to advanced programming or software engineering courses. In addition, Code Hunt can be used by seasoned developers to hone their programming skills or by companies to evaluate job candidates. At the core of the game experience is an automated program analysis and grading engine based on dynamic symbolic execution. The engine detects any behavioral differences between the player's code and the secret code fragment. The game works in any modern browser and currently supports C# and Java programs. Code Hunt is a dramatic evolution of our earlier Pex4Fun web platform, from which we have gathered considerable experience (including over 1.4 million programs submitted by users).
{"title":"Code hunt: gamifying teaching and learning of computer science at scale","authors":"N. Tillmann, J. D. Halleux, Tao Xie, J. Bishop","doi":"10.1145/2556325.2567870","DOIUrl":"https://doi.org/10.1145/2556325.2567870","url":null,"abstract":"Code Hunt (http://www.codehunt.com/) is an educational coding game (that runs in a browser) for teaching and learning computer science at scale. The game consists of a series of worlds and levels, which get increasingly challenging. In each level, the player has to discover a secret code fragment and write code for it. The game has sounds and a leaderboard to keep the player engaged. Code Hunt targets teachers and students from introductory to advanced programming or software engineering courses. In addition, Code Hunt can be used by seasoned developers to hone their programming skills or by companies to evaluate job candidates. At the core of the game experience is an automated program analysis and grading engine based on dynamic symbolic execution. The engine detects any behavioral differences between the player's code and the secret code fragment. The game works in any modern browser, and currently supports C# or Java programs. Code Hunt is a dramatic evolution of our earlier Pex4Fun web platform, from which we have gathered considerable experience (including over 1.4 million programs submitted by users).","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89624589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jennifer Sabourin, Lucy Kosturko, Scott W. McQuiggan
With instructional methods such as MOOCs and flipped classrooms rapidly gaining popularity and school budget cuts becoming more prevalent across the nation, increasing the usability of Open Educational Resources (OER) is highly relevant for today's educators. Although several OER databases exist providing access to hundreds of thousands of resources, navigating these spaces, evaluating resources, and integrating them into classroom instruction have proven less than efficient. The present research explores learning analytics for understanding real-world interaction patterns with SAS® Curriculum Pathways®, which has over 120,000 active teacher users and over 1,300 freely available resources across multiple disciplines. In this preliminary investigation, users are clustered based on overall usage patterns. Patterns of resource interaction are then identified using association analysis. Results of this exploratory investigation provide insight into how users interact with large OER databases and introduce many avenues for continued investigation.
{"title":"Teacher usage behaviors within an online open educational resource repository","authors":"Jennifer Sabourin, Lucy Kosturko, Scott W. McQuiggan","doi":"10.1145/2556325.2567875","DOIUrl":"https://doi.org/10.1145/2556325.2567875","url":null,"abstract":"With instructional methods such as MOOCs and flipped classrooms rapidly gaining popularity and school budget cuts becoming more prevalent across the nation, increasing the usability of Open Educational Resources (OER) is highly relevant for today's educators. Although several OER databases exist providing access to hundreds of thousands of resources, navigating these spaces, evaluating resources, and integrating them within classroom instruction has proven less than efficient. The present research explores learning analytics for understanding real-world interaction patterns with SAS® Curriculum Pathways®, which has over 120,000 active teacher users and over 1,300 freely available resources across multiple disciplines. In this preliminary investigation, users are clustered based on overall usage patterns. Patterns of resource interaction are then identified using association analysis. Results of this exploratory investigation provide insight into how users interact with large OER databases and introduce many avenues for continued investigation.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74524283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sergiy O. Nesterko, Daniel T. Seaton, J. Reich, Joseph McIntyre, Qiuyi Han, Isaac L. Chuang, Andrew D. Ho
Massive Open Online Courses (MOOCs) employ a variety of components to engage students in learning (e.g., videos, forums, quizzes). Some components are graded, which means that they play a key role in a student's final grade and certificate attainment. It is not yet clear how the due date structure of graded components affects student outcomes, including academic performance and alternative modes of learning. Using data from HarvardX and MITx, Harvard's and MIT's divisions for online learning, we study the structure of due dates on graded components for 10 completed MOOCs. We find that stricter due dates are associated with higher certificate attainment rates, but that fewer students who join late are able to earn a certificate. Our findings motivate further studies of how the use of graded components and deadlines affects academic and alternative learning of MOOC students, and can help inform the design of online courses.
{"title":"Due dates in MOOCs: does stricter mean better?","authors":"Sergiy O. Nesterko, Daniel T. Seaton, J. Reich, Joseph McIntyre, Qiuyi Han, Isaac L. Chuang, Andrew D. Ho","doi":"10.1145/2556325.2567877","DOIUrl":"https://doi.org/10.1145/2556325.2567877","url":null,"abstract":"Massive Open Online Courses (MOOCs) employ a variety of components to engage students in learning (eg. videos, forums, quizzes). Some components are graded, which means that they play a key role in a student's final grade and certificate attainment. It is not yet clear how the due date structure of graded components affects student outcomes including academic performance and alternative modes of learning of students. Using data from HarvardX and MITx, Harvard's and MIT's divisions for online learning, we study the structure of due dates on graded components for 10 completed MOOCs. We find that stricter due dates are associated with higher certificate attainment rates but fewer students who join late being able to earn a certificate. Our findings motivate further studies of how the use of graded components and deadlines affects academic and alternative learning of MOOC students, and can help inform the design of online courses.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77143428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jacquelin A. Speck, Eugene Gualtieri, G. Naik, Thach Nguyen, Kevin K. F. Cheung, L. Alexander, David E. Fenske
Since introducing Internet-based distance education programs in 1996, Drexel University has gained recognition as an online education leader. Remaining at the vanguard means finding innovative, automated solutions to determine which students are contributing to thoughtful discussion, helping faculty engage with online students more efficiently, and spending less time managing ever more complex Learning Management Systems (LMS). We introduce ForumDash, a BBLearn plugin for the Blackboard LMS, designed to enhance online learning. Through its three visualization tools, ForumDash shows instructors which students are contributing, struggling, or distracted, thereby helping instructors target their efforts, save time managing online courses, and scale course tools up to the level of Massive Open Online Courses (MOOCs). ForumDash also provides students with performance feedback, showing them whether their participation levels are satisfactory. Initial testing with two Drexel University Online courses produced positive feedback, and larger scale testing is in progress.
{"title":"ForumDash: analyzing online discussion forums","authors":"Jacquelin A. Speck, Eugene Gualtieri, G. Naik, Thach Nguyen, Kevin K. F. Cheung, L. Alexander, David E. Fenske","doi":"10.1145/2556325.2567848","DOIUrl":"https://doi.org/10.1145/2556325.2567848","url":null,"abstract":"Since introducing Internet-based distance education programs in 1996, Drexel University has gained recognition as an online education leader. Remaining at the vanguard means finding innovative, automated solutions to determine which students are contributing to thoughtful discussion, helping faculty engage with online students more efficiently, and spending less time managing ever more complex Learning Management Systems (LMS). We introduce ForumDash, a BBLearn plugin for the Blackboard LMS1, designed to enhance online learning. Through its three visualization tools, ForumDash shows instructors which students are contributing, struggling, or distracted, thereby helping instructors target their efforts, save time managing online courses, and scale course tools up to the level of Massive Open Online Courses (MOOCs). ForumDash also provides students with performance feedback, showing them whether their participation levels are satisfactory. Initial testing with two Drexel University Online courses produced positive feedback, and larger scale testing is in progress.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90971271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Derrick Coetzee, A. Fox, Marti A. Hearst, Bjoern Hartmann
We study effects of introducing a real-time chatroom into a massive open online course with several thousand students, supplementing an existing forum. The chatroom was supported by teaching assistants, and generated thousands of lines of mostly on-topic discussion by 28% of the 681 consenting chat-condition participants. Despite this, chat activity remained low (μ = 8.2 messages per hour) and we could find no significant effect of chat use on objective or subjective dependent variables such as grades, retention, forum participation, or students' sense of community. Further investigation reveals that only 12% of chat participants have substantive interactions, while the remainder are either passive or have trivial interactions that are unlikely to result in learning. We also find that pervasive, highly visible chat interfaces are effective in encouraging both active and substantive participation in chat. When compared to chat interfaces that are restricted to a single webpage, the pervasive interface yields 2.8 times as many users with substantive interactions.
{"title":"Chatrooms in MOOCs: all talk and no action","authors":"Derrick Coetzee, A. Fox, Marti A. Hearst, Bjoern Hartmann","doi":"10.1145/2556325.2566242","DOIUrl":"https://doi.org/10.1145/2556325.2566242","url":null,"abstract":"We study effects of introducing a real-time chatroom into a massive open online course with several thousand students, supplementing an existing forum. The chatroom was supported by teaching assistants, and generated thousands of lines of discussion by 28% of 681 consenting chat condition participants, mostly on-topic. Despite this, chat activity remained low ($mu=8.2$ messages per hour) and we could find no significant effect of chat use on objective or subjective dependent variables such as grades, retention, forum participation, or students' sense of community. Further investigation reveals that only 12% of chat participants have substantive interactions, while the remainder are either passive or have trivial interactions that are unlikely to result in learning. We also find that pervasive, highly visible chat interfaces are highly effective in encouraging both active and substantive participation in chat. When compared to chat interfaces that are restricted to a single webpage, the pervasive interface exhibits changes{2.8 times} as many users with substantive interactions.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83216250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Rosé, R. Carlson, Diyi Yang, Miaomiao Wen, L. Resnick, Pam Goldman, Jennifer Sherer
In this paper, we explore student dropout behavior in a Massive Open Online Course (MOOC). We use a survival model to measure the impact of three social factors on attrition over time for students who participated in the course discussion forum.
{"title":"Social factors that contribute to attrition in MOOCs","authors":"C. Rosé, R. Carlson, Diyi Yang, Miaomiao Wen, L. Resnick, Pam Goldman, Jennifer Sherer","doi":"10.1145/2556325.2567879","DOIUrl":"https://doi.org/10.1145/2556325.2567879","url":null,"abstract":"In this paper, we explore student dropout behavior in a Massively Open Online Course (MOOC). We use a survival model to measure the impact of three social factors that make predictions about attrition along the way for students who have participated in the course discussion forum.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81492578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chinmay Kulkarni, R. Socher, Michael S. Bernstein, Scott R. Klemmer
Peer assessment helps students reflect and exposes them to different ideas. It scales assessment and allows large online classes to use open-ended assignments. However, it requires students to spend significant time grading. How can we lower this grading burden while maintaining quality? This paper integrates peer and machine grading to preserve the robustness of peer assessment and lower grading burden. In the identify-verify pattern, a grading algorithm first predicts a student grade and estimates confidence, which is used to estimate the number of peer raters required. Peers then identify key features of the answer using a rubric. Finally, other peers verify whether these feature labels were accurately applied. This pattern adjusts the number of peers that evaluate an answer based on algorithmic confidence and peer agreement. We evaluated this pattern with 1370 students in a large, online design class. With only 54% of the student grading time, the identify-verify pattern yields 80-90% of the accuracy obtained by taking the median of three peer scores, and provides more detailed feedback. A second experiment found that verification dramatically improves accuracy with more raters, with a 20% gain over the peer-median with four raters. However, verification also leads to lower initial trust in the grading system. The identify-verify pattern provides an example of how peer work and machine learning can combine to improve the learning experience.
{"title":"Scaling short-answer grading by combining peer assessment with algorithmic scoring","authors":"Chinmay Kulkarni, R. Socher, Michael S. Bernstein, Scott R. Klemmer","doi":"10.1145/2556325.2566238","DOIUrl":"https://doi.org/10.1145/2556325.2566238","url":null,"abstract":"Peer assessment helps students reflect and exposes them to different ideas. It scales assessment and allows large online classes to use open-ended assignments. However, it requires students to spend significant time grading. How can we lower this grading burden while maintaining quality? This paper integrates peer and machine grading to preserve the robustness of peer assessment and lower grading burden. In the identify-verify pattern, a grading algorithm first predicts a student grade and estimates confidence, which is used to estimate the number of peer raters required. Peers then identify key features of the answer using a rubric. Finally, other peers verify whether these feature labels were accurately applied. This pattern adjusts the number of peers that evaluate an answer based on algorithmic confidence and peer agreement. We evaluated this pattern with 1370 students in a large, online design class. With only 54% of the student grading time, the identify-verify pattern yields 80-90% of the accuracy obtained by taking the median of three peer scores, and provides more detailed feedback. A second experiment found that verification dramatically improves accuracy with more raters, with a 20% gain over the peer-median with four raters. However, verification also leads to lower initial trust in the grading system. The identify-verify pattern provides an example of how peer work and machine learning can combine to improve the learning experience.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75750471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many online courses, information about learners is collected via surveys for accounting, instructional design, and research purposes. Aggregate information from such surveys is frequently reported in news articles and research papers, among other publications. While some authors acknowledge the potential bias due to non-response in course surveys, there have been no investigations of the severity of this bias or of methods for bias reduction in the online education context. A regression-based response-propensity model is described and applied to reweight a course survey, and discrepancies between adjusted and unadjusted outcome distributions are reported.
{"title":"Reducing non-response bias with survey reweighting: applications for online learning researchers","authors":"René F. Kizilcec","doi":"10.1145/2556325.2567850","DOIUrl":"https://doi.org/10.1145/2556325.2567850","url":null,"abstract":"In many online courses, information about learners is collected via surveys for accounting, instructional design, and research purposes. Aggregate information from such surveys is frequently reported in news articles and research papers, among other publications. While some authors acknowledge the potential bias due to non-response in course surveys, there are no investigations on the severity of the bias and methods for bias reduction in the online education context. A regression-based response-propensity model is described and applied to reweight a course survey, and discrepancies between adjusted and unadjusted outcome distributions are provided.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"247 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76014013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Asuncion, J. D. Haan, M. Mohri, Kayur Patel, Afshin Rostamizadeh, Umar Syed, Lauren Wong
Google Research recently tested a massive online class model for an internal engineering education program on machine learning that blended theoretical concepts with tutorials on Google-specific software tools. The goal of this training was to foster engineering capacity to leverage machine learning tools in future products. The course was delivered both synchronously and asynchronously, and students had the choice between studying independently or participating with a group. Since all students are company employees, unlike in most publicly offered MOOCs we can continue to measure the students' behavioral change long after the course is complete. This paper describes the course, outlines the available data set, and presents directions for analysis.
{"title":"Corporate learning at scale: lessons from a large online course at google","authors":"A. Asuncion, J. D. Haan, M. Mohri, Kayur Patel, Afshin Rostamizadeh, Umar Syed, Lauren Wong","doi":"10.1145/2556325.2567874","DOIUrl":"https://doi.org/10.1145/2556325.2567874","url":null,"abstract":"Google Research recently tested a massive online class model for an internal engineering education program, with machine learning as the topic, that blended theoretical concepts and Google-specific software tool tutorials. The goal of this training was to foster engineering capacity to leverage machine learning tools in future products. The course was delivered both synchronously and asynchronously, and students had the choice between studying independently or participating with a group. Since all students are company employees, unlike most publicly offered MOOCs we can continue to measure the students' behavioral change long after the course is complete. This paper describes the course, outlines the available data set and presents directions for analysis.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76369559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While there is a large amount of work on creating autograded massive open online courses (MOOCs), some kinds of complex, qualitative exam questions are still beyond the current state of the art. For MOOCs that need to deal with these kinds of questions, it is not possible for a small course staff to grade students' qualitative work. To test the efficacy of self-evaluation as a method for complex-question evaluation, students in two Google MOOCs have submitted projects and evaluated their own work. For both courses, teaching assistants graded a random sample of papers and compared their grades with self-evaluated student grades. We found that many of the submitted projects were of very high quality, and that a large majority of self-evaluated projects were accurately evaluated, scoring within just a few points of the gold standard grading.
{"title":"Self-evaluation in advanced power searching and mapping with google MOOCs","authors":"Julia Wilkowski, D. Russell, Amit Deutsch","doi":"10.1145/2556325.2566241","DOIUrl":"https://doi.org/10.1145/2556325.2566241","url":null,"abstract":"While there is a large amount of work on creating autograded massive open online courses (MOOCs), some kinds of complex, qualitative exam questions are still beyond the current state of the art. For MOOCs that need to deal with these kinds of questions, it is not possible for a small course staff to grade students' qualitative work. To test the efficacy of self-evaluation as a method for complex-question evaluation, students in two Google MOOCs have submitted projects and evaluated their own work. For both courses, teaching assistants graded a random sample of papers and compared their grades with self-evaluated student grades. We found that many of the submitted projects were of very high quality, and that a large majority of self-evaluated projects were accurately evaluated, scoring within just a few points of the gold standard grading.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76803609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}