Christopher A. Brooks, Craig D. S. Thompson, Stephanie D. Teasley
Demographic factors have been used successfully as predictors of student success in traditional higher education systems, but their relationship to achievement in MOOC environments has been largely untested. In this work we explore the predictive power of user demographics compared to learner interaction trace data generated by students in two MOOCs. We show that demographic information offers minimal predictive power compared to activity models, even when compared to models created very early in the course, before substantial interaction data has accrued.
Who You Are or What You Do: Comparing the Predictive Power of Demographics vs. Activity Patterns in Massive Open Online Courses (MOOCs). Christopher A. Brooks, Craig D. S. Thompson, Stephanie D. Teasley. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728668
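The comparison the abstract describes can be sketched in a few lines. The following is an illustrative toy example only, with synthetic data standing in for the authors' dataset and feature set: it shows how one might pit a demographics-only model against an early-activity model as predictors of course success.

```python
# Illustrative sketch only: synthetic data, not the authors' features or MOOCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
demographics = rng.normal(size=(n, 3))  # e.g. age, education level, region
activity = rng.normal(size=(n, 5))      # e.g. first-week interaction counts

# Synthetic outcome driven mostly by activity, mirroring the paper's finding
# that activity models dominate demographic models.
passed = (activity[:, 0] + 0.1 * demographics[:, 0]
          + rng.normal(scale=0.5, size=n) > 0).astype(int)

demo_auc = cross_val_score(LogisticRegression(), demographics, passed,
                           scoring="roc_auc", cv=5).mean()
activity_auc = cross_val_score(LogisticRegression(), activity, passed,
                               scoring="roc_auc", cv=5).mean()
print(f"demographics AUC={demo_auc:.2f}, activity AUC={activity_auc:.2f}")
```

Under this construction the activity model achieves a much higher cross-validated AUC, which is the shape of the comparison the paper reports.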
An important part of learning is interaction with peers, mentors, teaching assistants, and the instructor. Discussions and group work allow for interactive learning and a deeper understanding of class concepts. Online learning environments struggle to replicate this process. This is especially true when the scale of an online class is increased. To address this issue, a few MOOCs solicit teaching assistants to answer questions and, through their social position, help set academic standards in discussion forums. However, little is known about how different social roles influence the attribution of value to statements in these environments. This study demonstrates that the attitudes expressed by individuals in facilitating roles influence the acceptance of information shared in a discussion board setting.
Are You Listening?: Social Roles and Perceived Value of Statements in Online Learning Communities. R. Shillair, Rick Wash. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728697
Andrew E. Krumm, C. D'Angelo, T. Podkul, Mingyu Feng, H. Yamada, Rachel Beattie, Heather Hough, Christopher A. Thorn
This paper argues that improving learning reliably and at scale requires a specific orientation toward measurement, understood broadly. Drawing on examples from a partnership between SRI International and The Carnegie Foundation for the Advancement of Teaching, this paper describes measures of student behaviors that are being used by researchers and instructors to improve learning environments at more than 50 community colleges and four-year universities for thousands of students.
Practical Measures of Learning Behaviors. Andrew E. Krumm, C. D'Angelo, T. Podkul, Mingyu Feng, H. Yamada, Rachel Beattie, Heather Hough, Christopher A. Thorn. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728685
K. Koedinger, Ji Hee Kim, J. Z. Jia, Elizabeth Mclaughlin, Norman L. Bier
The printing press long ago and the computer today have made widespread access to information possible. Learning theorists have suggested, however, that mere information is a poor way to learn. Instead, more effective learning comes through doing. While the most popularized element of today's MOOCs is the video lecture, many MOOCs also include interactive activities that can afford learning by doing. This paper explores the learning benefits of the use of informational assets (e.g., videos and text) in MOOCs, versus the learning-by-doing opportunities that interactive activities provide. We find that students doing more activities learn more than students watching more videos or reading more pages. We estimate the learning benefit from extra doing (a 1 SD increase) to be more than six times that of extra watching or reading. Our data, from a psychology MOOC, are correlational in character; however, we employ causal inference mechanisms to lend support for the claim that the associations we find are causal.
Learning is Not a Spectator Sport: Doing is Better than Watching for Learning from a MOOC. K. Koedinger, Ji Hee Kim, J. Z. Jia, Elizabeth Mclaughlin, Norman L. Bier. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2724681
Existing datasets tell us only a partial story about the contextual factors that impact learners in Massive Open Online Courses (MOOCs). Information about race/ethnicity, education, and income helps us understand socioeconomic status, but such data is notoriously difficult to collect in an international context. Extant MOOC studies have not paid due attention to socioeconomic variables; they have either taken a U.S.-centric approach, ignored important country-specific dimensions of variables, or failed to ask about certain variables altogether, such as race/ethnicity. During a qualitative study of 24 self-regulated learners from population groups underrepresented in MOOCs, we piloted a short U.S.-centric demographic questionnaire. Preliminary results suggest that a large-scale survey designed for both cross-national and country-specific analyses would provide valuable information to MOOC researchers.
Cultural Relevance in MOOCs: Asking About Socioeconomic Context. Anna Kasunic, Jessica Hammer, A. Ogan. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728700
William Wu, C. Daskalakis, N. Kaashoek, Christos Tzamos, S. Weinberg
An efficient peer grading mechanism is proposed for grading the multitude of assignments in online courses. This novel approach is based on game theory and mechanism design. A set of assumptions and a mathematical model are developed to simulate the dominant-strategy behavior of students in a given mechanism. A benchmark function accounting for grade accuracy and workload is established to quantitatively compare the effectiveness and scalability of various mechanisms. After multiple iterations of mechanisms under increasingly realistic assumptions, three are proposed: Calibration, Improved Calibration, and Deduction. The Calibration mechanism performs as predicted by game theory when tested in an online crowd-sourced experiment, but fails when students are assumed to communicate. The Improved Calibration mechanism addresses this assumption, but at the cost of more effort spent grading. The Deduction mechanism performs relatively well in the benchmark, outperforming the Calibration, Improved Calibration, traditional automated, and traditional peer grading systems. The mathematical model and benchmark open the way for future derivative works to be developed and compared.
Game Theory based Peer Grading Mechanisms for MOOCs. William Wu, C. Daskalakis, N. Kaashoek, Christos Tzamos, S. Weinberg. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728676
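As a rough illustration of the kind of benchmark function the abstract describes, here is a hypothetical sketch that combines grade inaccuracy with a grading-workload penalty. The linear combination and the `workload_weight` parameter are assumptions for illustration, not the paper's actual benchmark definition.

```python
# Hypothetical benchmark sketch (lower is better): combines mean absolute
# grading error with a workload penalty. The functional form and the
# workload_weight value are illustrative assumptions, not the paper's.
def benchmark(true_grades, assigned_grades, assignments_graded_per_student,
              workload_weight=0.1):
    inaccuracy = sum(abs(t - a) for t, a in zip(true_grades, assigned_grades))
    inaccuracy /= len(true_grades)
    workload = workload_weight * assignments_graded_per_student
    return inaccuracy + workload

# A mechanism that grades accurately with little grading effort scores low:
print(benchmark([85, 90, 70], [84, 91, 70], assignments_graded_per_student=3))
```

A score like this lets mechanisms with different accuracy/workload trade-offs (e.g. Calibration vs. Deduction) be compared on a single axis.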
Catherine M. Hicks, C. Fraser, Purvi Desai, Scott R. Klemmer
Research suggests that online peer review can provide critical help to learners who would otherwise not be given individualized feedback on their work. However, little is known about how different characteristics of review systems impact reviewers. This extended abstract presents preliminary results from an online experiment examining how explicit numeric ratings change peer reviews. A between-subjects experiment found that peer reviewers who were asked to generate a numeric rating as well as general feedback gave significantly more explanations and made more positive comments compared with reviewers who were asked to give general feedback only. These exploratory findings suggest the need to further examine how online peer review systems' affordances can impact the reviews given in these systems.
Do Numeric Ratings Impact Peer Reviewers? Catherine M. Hicks, C. Fraser, Purvi Desai, Scott R. Klemmer. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728693
Edward Cutrell, Jacki O'Neill, S. Bala, B. Nitish, A. Cross, Nakull Gupta, Viraj Kumar, W. Thies
Students in the developing world are frequently cited as being among the most important beneficiaries of online education initiatives such as massive open online courses (MOOCs). While some predict that online classrooms will replace physical classrooms, our experience suggests that blending online and in-person instruction is more likely to succeed in developing regions. However, very little research has actually been done on the effects of online education or blended learning in these environments. In this paper we describe a blended learning initiative that combines videos from a large online course with peer-led sessions for undergraduate technical education in India. We performed a randomized controlled trial (RCT) that indicates our intervention was associated with a small but significant improvement in performance on a summative exam. We discuss the results of the RCT and an ethnographic study of the intervention to make recommendations for future, scalable blended learning initiatives for places such as India.
Blended Learning in Indian Colleges with Massively Empowered Classroom. Edward Cutrell, Jacki O'Neill, S. Bala, B. Nitish, A. Cross, Nakull Gupta, Viraj Kumar, W. Thies. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2724666
Cody A. Coleman, Daniel T. Seaton, Isaac L. Chuang
Advances in open online education have led to a dramatic increase in the size, diversity, and traceability of learner populations, offering tremendous opportunities to study the detailed learning behavior of users around the world. This paper adapts the topic modeling approach of Latent Dirichlet Allocation (LDA) to uncover behavioral structure from student logs in an MITx Massive Open Online Course, 8.02x: Electricity and Magnetism. LDA is typically found in the field of natural language processing, where it identifies the latent topic structure within a collection of documents. However, this framework can be adapted for the analysis of user-behavioral patterns by considering user interactions with courseware as a "bag of interactions" equivalent to the "bag of words" model found in topic modeling. By employing this representation, LDA forms probabilistic use cases that cluster students based on their behavior. Through the probability distributions associated with each use case, this approach provides an interpretable representation of user access patterns, while reducing the dimensionality of the data and improving accuracy. Using only the first week of logs, we can predict whether or not a student will earn a certificate with 0.81 ± 0.01 cross-validation accuracy. Thus, the method presented in this paper is a powerful tool for understanding user behavior and predicting outcomes.
Probabilistic Use Cases: Discovering Behavioral Patterns for Predicting Certification. Cody A. Coleman, Daniel T. Seaton, Isaac L. Chuang. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2724662
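The "bag of interactions" idea can be sketched with off-the-shelf tools. The following toy example (synthetic counts, not the 8.02x data; interaction types and labels are placeholders) fits LDA to per-student interaction counts and uses the resulting use-case mixtures as features for a certification classifier.

```python
# Illustrative sketch (not the authors' code): treat per-student counts of
# courseware events as a "bag of interactions" and fit LDA over them, the
# way topic models fit a "bag of words".
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: rows = students, columns = interaction types (e.g. video play,
# problem check, forum view); values = first-week event counts.
interactions = rng.poisson(lam=3.0, size=(200, 12))
earned_certificate = rng.integers(0, 2, size=200)  # placeholder labels

# LDA over interaction counts, analogous to topics over word counts.
lda = LatentDirichletAllocation(n_components=4, random_state=0)
use_case_mixtures = lda.fit_transform(interactions)  # shape (200, 4)

# Each student's mixture over "use cases" becomes a low-dimensional,
# interpretable feature vector for a simple certification classifier.
clf = LogisticRegression().fit(use_case_mixtures, earned_certificate)
print(clf.score(use_case_mixtures, earned_certificate))
```

Each row of `use_case_mixtures` sums to 1, so a student can be read as, say, 70% "video watcher" and 30% "problem solver", which is what makes the representation interpretable as well as compact.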
Christopher Chudzicki, David E. Pritchard, Zhongzhou Chen
We report one of the first applications of treatment/control group learning experiments in MOOCs. We have compared the efficacy of deliberate practice (practicing a key procedure repetitively) with traditional practice on "whole problems". Evaluating the learning using traditional whole problems, we find that traditional practice outperforms drag and drop, which in turn outperforms multiple choice. In addition, we measured the amount of learning that occurs during a pretest administered in a MOOC environment and transfers to the same question when placed on the posttest. We place a limit on the amount of such transfer, which suggests that this type of learning effect is very weak compared to the learning observed throughout the entire course.
Learning Experiments Using AB Testing at Scale. Christopher Chudzicki, David E. Pritchard, Zhongzhou Chen. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, March 2015. doi:10.1145/2724660.2728703