People in general, and students in particular, tend to misjudge their own abilities: some underestimate their skills, while others overestimate them. This paper investigates the degree to which metacognition is asymmetric in real-world learning, examining how students' confidence changes over the course of a semester and how it relates to their academic performance. Our analysis of 129,644 students learning in eight courses within the LearnSmart platform indicates that poor or unrealistic metacognition is asymmetric: students are biased in one direction, being more likely to be overconfident than underconfident. Additionally, while confidence shows little significant change through the middle of the semester, changes are apparent in the first and last few weeks of the course. More specifically, there is a sharp increase in underconfidence, and a simultaneous decrease in realistic self-evaluation, toward the end of the semester. Finally, both overconfidence and underconfidence appear to be correlated with students' overall course performance: an increase in overconfidence is related to higher overall performance, while an increase in underconfidence is associated with lower overall performance.
"Exploring the asymmetry of metacognition" by Ani Aghababyan, N. Lewkow, and R. Baker. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3027388.
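The over/under/realistic breakdown the paper relies on can be computed directly from per-question (confidence, correctness) pairs. A minimal sketch, assuming a binary self-reported confidence judgment collected before each answer (the function name and data layout are illustrative, not LearnSmart's actual schema):

```python
from collections import Counter

def classify_confidence(responses):
    """Label each (confident, correct) pair and return the rate of each label.
    responses: iterable of (confident, correct) boolean pairs, where
    `confident` is the student's pre-answer self-assessment."""
    counts = Counter()
    for confident, correct in responses:
        if confident and not correct:
            counts["overconfident"] += 1      # thought they knew it, got it wrong
        elif not confident and correct:
            counts["underconfident"] += 1     # doubted themselves, got it right
        else:
            counts["realistic"] += 1          # self-assessment matched the outcome
    total = sum(counts.values())
    return {k: counts[k] / total
            for k in ("overconfident", "underconfident", "realistic")}

rates = classify_confidence([(True, False), (True, True), (False, True), (True, True)])
# → {'overconfident': 0.25, 'underconfident': 0.25, 'realistic': 0.5}
```

Asymmetry then shows up as the overconfident rate exceeding the underconfident rate across the population.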
In this paper, we describe a tablet application designed around an interactive game-based science museum exhibit. It aims to give museum docents actionable information about visitors' activity, enabling docents to offer visitors more meaningful assistance and prompts than they typically can without this interface augmentation.
"What are visitors up to?: helping museum facilitators know what visitors are doing" by Vishesh Kumar, Michael Tissenbaum, and M. Berland. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3029456.
Vitomir Kovanović, Srećko Joksimović, Philip Katerinopoulos, Charalampos Michail, George Siemens, D. Gašević
In 2011, the MOOC phenomenon swept the world of education and put online learning at the center of public discourse around the world. Although researchers were excited by the vast amounts of MOOC data being collected, the benefits of this data fell short of expectations for several reasons. Analyses of MOOC data are time-consuming and labor-intensive, and require a highly advanced set of technical skills that education researchers often lack. Because of this, MOOC data analyses are rarely completed before the courses end, limiting the data's potential to improve students' learning outcomes and experience. In this paper, we introduce MOOCito (MOOC intervention tool), a user-friendly software platform for the analysis of MOOC data that focuses on conducting data-informed instructional interventions and course experimentation. We cover the key design principles behind MOOCito and provide an overview of the trends in MOOC research leading to its development. Although MOOCito is a work in progress, we outline its prototype and present the results of a user evaluation study focused on the system's perceived usability and ease of use. We discuss the results of the study as well as their practical implications.
"Developing a MOOC experimentation platform: insights from a user study". In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3027398.
An ongoing study is being run to ensure that the McGraw-Hill Education LearnSmart platform teaches students as efficiently as possible. The first step is to identify what Knowledge Components (KCs) exist in the content; while the content is tagged by experts, these tags need to be recalibrated periodically. LearnSmart courses are organized into chapters corresponding to those of a textbook; each chapter can have anywhere from about a hundred to a few thousand questions. The KC extraction algorithms proposed by Barnes [1] and Desmarais et al. [3] are applied on a chapter-by-chapter basis. To assess how well each mined Q-matrix describes the observed learning, the PFA model of Pavlik et al. [4] is fitted to it and a cross-validated AUC is calculated; the models are assessed on whether PFA's predictions of student correctness are accurate. Early results show that both algorithms do a reasonable job of describing student progress, but Q-matrices with very different numbers of KCs fit the observed data similarly well. Consequently, further consideration is required before automated extraction is practical in this context.
"Mining knowledge components from many untagged questions" by N. Zimmerman and R. Baker. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3029462.
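The evaluation loop described above — fit a PFA-style model to each candidate Q-matrix and score it by cross-validated AUC — can be sketched as follows. This is an illustrative simplification, not the authors' pipeline: PFA is approximated here as a logistic regression over per-KC intercepts plus running counts of each student's prior successes and failures on each KC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pfa_features(responses, q_matrix, n_kc):
    """Build PFA-style features from time-ordered (student, item, correct)
    triples. q_matrix[item] is the list of KC indices tagged for that item."""
    succ, fail = {}, {}            # (student, kc) -> prior success/failure counts
    X, y = [], []
    for student, item, correct in responses:
        row = np.zeros(3 * n_kc)   # [kc indicators | success counts | failure counts]
        for kc in q_matrix[item]:
            row[kc] = 1.0
            row[n_kc + kc] = succ.get((student, kc), 0)
            row[2 * n_kc + kc] = fail.get((student, kc), 0)
        X.append(row)
        y.append(correct)
        for kc in q_matrix[item]:  # update counts only after the attempt
            key = (student, kc)
            if correct:
                succ[key] = succ.get(key, 0) + 1
            else:
                fail[key] = fail.get(key, 0) + 1
    return np.array(X), np.array(y)

def q_matrix_auc(responses, q_matrix, n_kc, folds=5):
    """Score one candidate Q-matrix by cross-validated AUC of the PFA fit."""
    X, y = pfa_features(responses, q_matrix, n_kc)
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=folds, scoring="roc_auc").mean()
```

Comparing `q_matrix_auc` across mined Q-matrices reproduces the paper's observation in miniature: quite different KC assignments can yield similar AUCs, so AUC alone may not pick out the "right" Q-matrix.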
A. Cooper, Alan Berg, Niall Sclater, Tanya Dorey-Elias, Kirsty Kitto
The hackathon is intended as a practical, hands-on workshop involving participants from academia and commercial organizations with both technical and practitioner expertise. It will address the outstanding challenge of building visualizations that are effective for their intended audience: informing action, resistant to misinterpretation, and appropriate to their context. It will surface particular issues as workshop challenges and explore responses to these challenges as visualizations resting on interoperability standards and API-oriented open architectures.
"LAK17 hackathon: getting the right information to the right people so they can take the right action". In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3029435.
Sinem Aslan, Eda Okur, Nese Alyüz, Sinem Emine Mete, Ece Oktay, Ergin Utku Genc, Asli Arslan Esme
Several systems attempt to understand students' emotional states automatically using machine learning models. However, generic AI models of emotion lack the accuracy needed to autonomously and meaningfully trigger interventions. Collecting self-labels from students as they assess their internal states is one way to gather the labeled, subject-specific data needed to build personalized emotional engagement models. In this paper, we outline a preliminary analysis of emotional self-labels collected from students while using a learning platform.
"Students' emotional self-labels for personalized models". In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3029452.
Xinyu Fu, Atsushi Shimada, H. Ogata, Yuta Taniguchi, D. Suehiro
Many universities choose the C programming language as the first language they teach, early in their students' programs. However, students often find programming courses difficult, and these courses often have among the highest dropout rates of the computer science curriculum. It is therefore critical to provide more effective instruction that helps students understand the syntax of C and prevents them from losing interest in programming. In addition, homework and paper-based exams are still the main assessment methods in most classrooms, and the large amount of evaluation work makes it difficult for teachers to grasp their students' learning situation. To facilitate the teaching and learning of C, in this article we propose LAPLE (Learning Analytics in Programming Language Education), a system providing a learning dashboard that captures students' behavior in the classroom and identifies the different difficulties that different students face with different knowledge. With LAPLE, teachers can better grasp students' learning situation in real time and improve educational materials using the analysis results. For their part, novice undergraduate programmers can use LAPLE to locate syntax errors in C and get recommendations from educational materials on how to fix them.
"Real-time learning analytics for C programming language courses". In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3027407.
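Locating syntax errors the way a dashboard like this would requires mapping compiler diagnostics back to source locations. A minimal, illustrative sketch (not the authors' implementation) that parses the standard `file:line:col: kind: message` diagnostic format emitted by gcc and clang:

```python
import re

# Matches diagnostics such as "main.c:12:5: error: expected ';' before 'return'"
DIAG_RE = re.compile(
    r"^(?P<file>[^:\n]+):(?P<line>\d+):(?P<col>\d+): "
    r"(?P<kind>error|warning): (?P<msg>.*)$",
    re.MULTILINE,
)

def parse_diagnostics(compiler_output):
    """Extract (file, line, col, kind, message) tuples from gcc/clang output."""
    return [
        (m["file"], int(m["line"]), int(m["col"]), m["kind"], m["msg"])
        for m in DIAG_RE.finditer(compiler_output)
    ]

out = ("main.c:12:5: error: expected ';' before 'return'\n"
       "main.c:3:1: warning: unused variable 'x'")
print(parse_diagnostics(out))
# → [('main.c', 12, 5, 'error', "expected ';' before 'return'"),
#    ('main.c', 3, 1, 'warning', "unused variable 'x'")]
```

With source locations extracted, a dashboard can aggregate error kinds per student or per exercise and link each one to relevant material.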
K. Koedinger, Ran Liu, John C. Stamper, Candace Thille, P. Pavlik
This workshop will explore community-based repositories for educational data and the analytic tools used to connect researchers and reduce the barriers to data sharing. Leading innovators in the field, together with attendees, will identify and report on the bottlenecks that remain on the way to a unified repository, and discuss possible solutions. We will present LearnSphere, an NSF-funded system that supports collaborating on and sharing a wide variety of educational data, learning analytics methods, and visualizations while maintaining confidentiality. We will then hold hands-on sessions in which attendees can apply existing learning analytics workflows to their choice of educational datasets in the repository (using a simple drag-and-drop interface), add their own learning analytics workflows (which requires only basic coding experience), or both. Leaders and attendees will then jointly discuss the unique benefits, as well as the limitations, of these solutions. Our goal is to create building blocks that allow researchers to integrate their data and analysis methods with others' in order to advance the future of learning science.
"Community based educational data repositories and analysis tools". In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3029442.
We discuss the application of a hand-drawn self-visualization approach to learner data, drawing attention to the space of representational possibilities, the power of representation interactions, and the performativity of information representation.
"Dear learner: participatory visualisation of learning data for sensemaking" by Simon Knight, T. Anderson, and Kelly Tall. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3029443.
Grouping students with similar past achievement together (tracking) might affect their reading achievement. Multilevel analyses of 208,057 fourth-grade students in 40 countries showed that clustering students within schools by past achievement was linked to higher reading achievement, consistent with the benefits of customized, targeted instruction. Meanwhile, students had higher reading achievement when the differences (variances) among classmates' past achievement, reading attitudes, or family socioeconomic status (SES) were greater; these results are consistent with the view that greater student differences yield more help opportunities (higher achievers help lower achievers, so that both learn) and foster learning from classmates' different resources, attitudes, and behaviors. A student also had higher reading achievement when classmates had more resources (SES, home educational resources, reading attitude, past achievement), suggesting that classmates shared their resources and helped one another. Modeling of non-linear relations and achievement subsamples of students supported these interpretations. Principals can use these results, and a simpler version of this methodology, to re-allocate students and resources across course sections at little cost to improve students' reading achievement.
"How to assign students into sections to raise learning" by M. Chiu, B. Chow, and S. Joh. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17), 2017. DOI: 10.1145/3027385.3027439.
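The classmate-composition predictors at the heart of this kind of analysis — each student's classmates' mean and variance, excluding the student themself — can be computed with leave-one-out aggregates. A minimal numpy sketch of that feature construction (illustrative only; the paper itself uses full multilevel models):

```python
import numpy as np

def leave_one_out_stats(values):
    """For each member of one class, return the mean and (population)
    variance of the *other* members' values, via leave-one-out sums."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    total = values.sum()
    total_sq = (values ** 2).sum()
    loo_mean = (total - values) / (n - 1)            # classmates' mean
    loo_var = (total_sq - values ** 2) / (n - 1) - loo_mean ** 2
    return loo_mean, loo_var

# Past-achievement scores for one small class of three students
mean, var = leave_one_out_stats([1.0, 2.0, 3.0])
# mean → [2.5, 2.0, 1.5]   (each student's classmates' average)
# var  → [0.25, 1.0, 0.25] (each student's classmates' variance)
```

These per-student aggregates, computed per variable (past achievement, reading attitude, SES), would then enter a multilevel regression as class-composition predictors alongside the student's own characteristics.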