A common pattern in undergraduate computer science curricula is to teach an introductory subject in Python followed by a more advanced software engineering subject in Java. We are building an online tool that will help students who already know Python learn the syntax and semantics of Java. Our system will differ from existing online tutors and tools for learning Java in two main respects. First, our tutor will focus on the transition from Python to Java: building on what students already know will let us skip over basic programming concepts and concentrate on the specifics of Java. Second, our tutor will crowdsource the writing of test cases for problems to the learners themselves. This will give students practice writing tests, and will also reduce the burden on instructors, who would otherwise need to implement test suites for every problem in the tutor.
{"title":"Java tutor: bootstrapping with python to learn Java","authors":"Casey O'Brien, Max Goldman, Rob Miller","doi":"10.1145/2556325.2567873","DOIUrl":"https://doi.org/10.1145/2556325.2567873","url":null,"abstract":"A common pattern among undergraduate computer science curriculums is to teach an introductory subject in Python followed by a more advanced software engineering subject in Java. We are building an online tool that will help students who already know Python learn the syntax and semantics of Java. Our system will differ from existing online tutors and tools for learning Java in two main aspects. First, our tutor will focus on the transition from Python to Java. Using this basis will allow us to gloss over basic concepts of programming which students are already familiar with and focus on the specifics of Java. Second, our tutor will crowdsource writing test cases for problems to the learners themselves. This will give students practice writing tests, and will also reduce the burden on instructors, who would otherwise need to implement test suites for every problem in the tutor.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77969379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Students who registered for the Mapping with Google massive open online course (MOOC) were asked several questions during the registration process to identify prior experience with eleven skills as well as their goals for registering for the course. Students selected goals from a list; they were periodically reminded of these goals during the MOOC. At the end of the course, we compared students' self-reports of goal achievement on a post-course survey with behavioral clickstream analysis. In addition, we assessed how well prior skill in a subject predicts a student's course completion and found no correlation. Our research shows that students who completed course activities were more likely to earn certificates of completion than peers who did not.
{"title":"Student skill and goal achievement in the mapping with google MOOC","authors":"Julia Wilkowski, Amit Deutsch, D. Russell","doi":"10.1145/2556325.2566240","DOIUrl":"https://doi.org/10.1145/2556325.2566240","url":null,"abstract":"Students who registered for the Mapping with Google massive open online course (MOOC) were asked several questions during the registration process to identify prior experience with eleven skills as well as their goals for registering for the course. Students selected goals from a list; they were periodically reminded of these goals during the MOOC. At the end of the course, we compared students' self reports of goal achievement on a post-course survey with behavioral click-stream analysis. In addition, we assessed how well prior skill in a subject predicts a student's course completion and found no correlation. Our research shows that students who completed course activities were more likely to earn certificates of completion than peers who did not.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75305543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Massive open online courses (MOOCs) provide learning materials and automated assessments for large numbers of virtual users. Because every interaction is recorded, we can longitudinally model performance over the course of the class. We create a panel model of achievement in an early MOOC to estimate within- and between-user differences. In this study, we hope to contribute to the HCI literature, first, by applying quasi-experimental methods to identify behaviors that may support student learning in a virtual environment, and, second, by using a panel model that takes into account the longitudinal, dynamic nature of a multi-week class.
{"title":"Tracking progress: predictors of students' weekly achievement during a circuits and electronics MOOC","authors":"Jennifer DeBoer, L. Breslow","doi":"10.1145/2556325.2567863","DOIUrl":"https://doi.org/10.1145/2556325.2567863","url":null,"abstract":"Massive open online courses (MOOCs) provide learning materials and automated assessments for large numbers of virtual users. Because every interaction is recorded, we can longitudinally model performance over the course of the class. We create a panel model of achievement in an early MOOC to estimate within- and between-user differences. In this study, we hope to contribute to HCI literature by, first, applying quasi-experimental methods to identify behaviors that may support student learning in a virtual environment, and, second, by using a panel model that takes into account the longitudinal, dynamic nature of a multiple-week class.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78961712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discussion forums, employed by MOOC providers as the primary mode of interaction among instructors and students, have emerged as one of the important components of online courses. We empirically study contribution behavior in these online collaborative learning forums using data from 44 MOOCs hosted on Coursera, focusing primarily on the highest-volume contributors---"superposters"---in a forum. We explore who these superposters are and study their engagement patterns across the MOOC platform, with a focus on the following question---to what extent is superposting a positive phenomenon for the forum? Specifically, while superposters clearly contribute heavily to the forum in terms of quantity, how do these contributions rate in terms of quality, and does this prolific posting behavior negatively impact contribution from the large remainder of students in the class? We analyze these questions across the courses in our dataset, and find that superposters display above-average engagement across Coursera, enrolling in more courses and obtaining better grades than the average forum participant; additionally, students who are superposters in one course are significantly more likely to be superposters in other courses they take. In terms of utility, our analysis indicates that while being neither the fastest nor the most upvoted, superposters' responses are speedier and receive more upvotes than the average forum user's posts; a manual assessment of quality on a subset of this content supports this conclusion that a large fraction of superposter contributions indeed constitute useful content. Finally, we find that superposters' prolific contribution behavior does not `drown out the silent majority'---high superposter activity correlates positively and significantly with higher overall activity and forum health, as measured by total contribution volume, higher average perceived utility in terms of received votes, and a smaller fraction of orphaned threads.
{"title":"Superposter behavior in MOOC forums","authors":"Jonathan Huang, Anirban Dasgupta, Arpita Ghosh, Jane Manning, Marc Sanders","doi":"10.1145/2556325.2566249","DOIUrl":"https://doi.org/10.1145/2556325.2566249","url":null,"abstract":"Discussion forums, employed by MOOC providers as the primary mode of interaction among instructors and students, have emerged as one of the important components of online courses. We empirically study contribution behavior in these online collaborative learning forums using data from 44 MOOCs hosted on Coursera, focusing primarily on the highest-volume contributors---\"superposters\"---in a forum. We explore who these superposters are and study their engagement patterns across the MOOC platform, with a focus on the following question---to what extent is superposting a positive phenomenon for the forum? Specifically, while superposters clearly contribute heavily to the forum in terms of quantity, how do these contributions rate in terms of quality, and does this prolific posting behavior negatively impact contribution from the large remainder of students in the class? We analyze these questions across the courses in our dataset, and find that superposters display above-average engagement across Coursera, enrolling in more courses and obtaining better grades than the average forum participant; additionally, students who are superposters in one course are significantly more likely to be superposters in other courses they take. In terms of utility, our analysis indicates that while being neither the fastest nor the most upvoted, superposters' responses are speedier and receive more upvotes than the average forum user's posts; a manual assessment of quality on a subset of this content supports this conclusion that a large fraction of superposter contributions indeed constitute useful content. Finally, we find that superposters' prolific contribution behavior does not `drown out the silent majority'---high superposter activity correlates positively and significantly with higher overall activity and forum health, as measured by total contribution volume, higher average perceived utility in terms of received votes, and a smaller fraction of orphaned threads.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80638204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Massive online courses introduced Community TAs (CTAs) to help scale teaching staff support. CTAs are former top students who return as volunteer course staff. We studied CTAs in 3 classes on Coursera, including interviews and surveys from a Human-Computer Interaction (HCI) class. A key benefit of CTAs is their brokering role that mediates staff and student goals. CTAs provide greater discussion forum coverage (both in quantity and time of day) compared to instructor and Head TA (HTA) capabilities and contribute to peer assessment. As CTAs are new teachers, physically distributed, and culturally diverse, clear division of responsibilities is especially important.
{"title":"Community TAs scale high-touch learning, provide student-staff brokering, and build esprit de corps","authors":"Kathryn Papadopoulos, Lalida Sritanyaratana, Scott R. Klemmer","doi":"10.1145/2556325.2567860","DOIUrl":"https://doi.org/10.1145/2556325.2567860","url":null,"abstract":"Massive online courses introduced Community TAs (CTAs) to help scale teaching staff support. CTAs are former top students who return as volunteer course staff. We studied CTAs in 3 classes on Coursera, including interviews and surveys from a Human-Computer Interaction (HCI) class. A key benefit of CTAs is their brokering role that mediates staff and student goals. CTAs provide greater discussion forum coverage (both in quantity and time of day) compared to instructor and Head TA (HTA) capabilities and contribute to peer assessment. As CTAs are new teachers, physically distributed, and culturally diverse, clear division of responsibilities is especially important.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77996958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hints are sometimes used in online learning systems to help students when they are having difficulties. However, in all of the systems we are aware of, the hints are fixed ahead of time and do not depend on the unsuccessful attempts the student has already made, which severely limits their effectiveness. We have developed an alternative system for giving hints to students. The main difference is that the system allows an instructor to send a hint to a student after the student has made several failed attempts to solve the problem. After analyzing the student's mistakes, the instructor is better able to understand the problem in the student's thinking and send a more helpful hint. We have deployed this system in a probability and statistics course with 176 students and demonstrated the superiority of the new hint methodology over the traditional one. The limiting factor on the effectiveness of our system is the amount of manual labor required to send each hint, which is the main obstacle we see in scaling this approach to larger classes and to MOOCs. We are currently exploring several approaches for addressing this problem: 1) letting students send hints to their peers; 2) creating hint libraries; 3) using machine learning methods to automate the process of mapping student mistakes to the most relevant hint.
{"title":"A system for sending the right hint at the right time","authors":"Matthew Elkherj, Y. Freund","doi":"10.1145/2556325.2567864","DOIUrl":"https://doi.org/10.1145/2556325.2567864","url":null,"abstract":"Hints are sometimes used in online learning system to help students when they are having difficulties. However, in all of the systems we are aware of, the hints are fixed ahead of time and do not depend on the unsuccessful attempts the student has already made. This severely limits the effectiveness of the hints. We have developed an alternative system for giving hints to students. The main difference is that the system allows an instructor to send a hint to a student after the student has made several attempts to solve the problem and failed. After analyzing the student's mistakes, the instructor is better able to understand the problem in the student's thinking and send them a more helpful hint. We have deployed this system in a probability and statistics course with 176 students. We have demonstrated the superiority of the new hints methodology over the traditional one. The limiting factor on the effectiveness of our system is the amount of manual labor required to send each hint. This is the main obstacle we see in scaling this approach to larger classes and to MOOCs. We are currently exploring several approaches for addressing this problem: 1) Letting students send hints to their peers. 2) Creating hint libraries. 3) Using machine learning methods to automate the process of mapping student mistakes to the most relevant hint.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86500861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because MOOCs offer complete logs of each student's activities, there is hope that it may be possible to find out which activities are the most useful for learning. We start this quest by examining correlations between time spent on specific course resources and various measures of student performance: score on assessments, skill as defined by Item Response Theory, improvement in skill over the period of the course, and conceptual improvement as measured by a pre-post test. We study two MOOCs offered on edX.org by MIT faculty: Circuits and Electronics (6.002x) and Mechanics Review (8.MReV). Surprisingly, we find strong negative correlations in 6.002x between student skill and resource use; we attribute these findings to the fact that students with higher initial skills can do the exercises faster and with less time spent on instructional resources. We find weak or slightly negative correlations between relative improvement and resource use in 6.002x. The correlations with learning are stronger for conceptual knowledge in 8.MReV than with relative improvement, but similar for all course activities (except that eText checkpoint questions correlate more strongly with relative improvement). Clearly, the wide distribution of demographics and initial skill in MOOCs challenges us to isolate the habits of learning and resource use that correlate with learning for different students.
{"title":"Correlating skill and improvement in 2 MOOCs with a student's time on tasks","authors":"J. Champaign, Kimberly F. Colvin, A. Liu, Colin Fredericks, Daniel T. Seaton, David E. Pritchard","doi":"10.1145/2556325.2566250","DOIUrl":"https://doi.org/10.1145/2556325.2566250","url":null,"abstract":"Because MOOCs offer complete logs of student activities for each student there is hope that it may be possible to find out which activities are the most useful for learning. We start this quest by examining correlations between time spent on specific course resources and various measures of student performance: score on assessments, skill as defined by Item Response Theory, improvement in skill over the period of the course, and conceptual improvement as measured by a pre-post test. We study two MOOCs offered on edX.org by MIT faculty: Circuits and Electronics (6.002x) and Mechanics Review (8.MReV). Surprisingly, we find strong negative correlations in 6.002x between student skill and resource use; we attribute these findings to the fact that students with higher initial skills can do the exercises faster and with less time spent on instructional resources. We find weak or slightly negative correlations between relative improvement and resource use in 6.002x. The correlations with learning are stronger for conceptual knowledge in 8.MReV than with relative improvement, but similar for all course activities (except that eText checkpoint questions correlate more strongly with relative improvement). Clearly, the wide distribution of demographics and initial skill in MOOCs challenges us to isolate the habits of learning and resource use that correlate with learning for different students.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89793335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We use visual analytics to explore participation in five MOOCs at the University of Maryland. In some of these courses, our analysis reveals interesting clustering patterns of student behavior. For other courses, visualizations provide "color" to help us better understand the range of student behavior.
{"title":"Visual analytics of MOOCs at maryland","authors":"Zhengzheng Xu, Dan Goldwasser, B. Bederson, Jimmy J. Lin","doi":"10.1145/2556325.2567878","DOIUrl":"https://doi.org/10.1145/2556325.2567878","url":null,"abstract":"We use visual analytics to explore participation in five MOOCs at the University of Maryland. In some of these courses, our analysis reveals interesting clustering patterns of student behavior. For other courses, visualizations provide \"color\" to help us better understand the range of student behavior.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76919124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an overview of the design of a conversational intelligent tutoring system, called DeepTutor, based on the framework of Learning Progressions. Learning Progressions capture students' successful paths towards mastery. The assumption behind the proposed tutor is that guiding instruction with Learning Progressions will make the system more effective, and more efficient as well.
{"title":"DeepTutor: towards macro- and micro-adaptive conversational intelligent tutoring at scale","authors":"V. Rus, D. Stefanescu, Nobal B. Niraula, A. Graesser","doi":"10.1145/2556325.2567885","DOIUrl":"https://doi.org/10.1145/2556325.2567885","url":null,"abstract":"We present an overview of the design of a conversational intelligent tutoring system, called DeepTutor, based on the framework of Learning Progressions. Learning Progressions capture students' successful paths towards mastery. The assumption of the proposed tutor is that by guiding instruction based on Learning Progressions, the system will be more effective (and efficient for that matter).","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86038844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video lectures are nowadays widely used by growing numbers of learners all over the world. Nevertheless, learners' interactions with the videos are not readily available, because online video platforms do not share them. In this paper, we present an open-source video learning analytics system, which is also available as a free service to researchers. Our system facilitates the analysis of video learning behavior by capturing learners' interactions with the video player (e.g., seek/scrub, play, pause). In an empirical user study, we captured hundreds of user interactions with the video player and analyzed them as a learner activity time series. We found that learners used replaying to retrieve the video segments that contained the answers to the survey questions. These findings indicate the potential of video analytics to represent learner behavior. Further research should elaborate on learner behavior by collecting large-scale data. In this way, the producers of online video pedagogy will be able to understand the use of this emerging medium and make appropriate amendments to current video-based learning systems and practices.
{"title":"Open system for video learning analytics","authors":"K. Chorianopoulos, M. Giannakos, N. Chrisochoides","doi":"10.1145/2556325.2567855","DOIUrl":"https://doi.org/10.1145/2556325.2567855","url":null,"abstract":"Video lectures are nowadays widely used by growing numbers of learners all over the world. Nevertheless, learners' interactions with the videos are not readily available, because online video platforms do not share them. In this paper, we present an open-source video learning analytics system, which is also available as a free service to researchers. Our system facilitates the analysis of video learning behavior by capturing learners' interactions with the video player (e.g, seek/scrub, play, pause). In an empirical user study, we captured hundreds of user interactions with the video player by analyzing the interactions as a learner activity time series. We found that learners employed the replaying activity to retrieve the video segments that contained the answers to the survey questions. The above findings indicate the potential of video analytics to represent learner behavior. Further research, should be able to elaborate on learner behavior by collecting large-scale data. In this way, the producers of online video pedagogy will be able to understand the use of this emerging medium and proceed with the appropriate amendments to the current video-based learning systems and practices.","PeriodicalId":20830,"journal":{"name":"Proceedings of the first ACM conference on Learning @ scale conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81841172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}