"Using correlational topic modeling for automated topic identification in intelligent tutoring systems"
Stefan Slater, R. Baker, M. Almeda, Alex J. Bowers, N. Heffernan
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027438

Abstract: Student knowledge modeling is an important part of modern personalized learning systems, but it typically relies on valid models of the structure of content and skills in a domain. These models are often developed through expert tagging of skills to items. However, content creators in crowdsourced personalized learning systems often lack the time (and sometimes the domain knowledge) to tag skills themselves. Fully automated approaches that rely on the covariance of correctness across items can produce effective skill-item mappings, but the resulting mappings are often difficult to interpret. In this paper we propose an alternative approach to automatically labeling skills in a crowdsourced personalized learning system: correlated topic modeling, a natural language processing technique, applied to the linguistic content of mathematics problems. We find a range of potentially meaningful and useful topics within the context of the ASSISTments system for mathematics problem solving.
"Temporal analytics with discourse analysis: tracing ideas and impact on communal discourse"
Vwen Yen Lee, S. Tan
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027386

Abstract: This paper presents a study combining temporal analytics and discourse analysis of an online discussion among 13 in-service teachers and 2 instructors. A discussion forum of 281 posts in an online collaborative learning environment was examined. A text-mining tool was used to discover keywords in the discourse, and social network analysis based on these keywords revealed a significant presence of relevant and promising ideas. However, uncovering the key ideas alone is insufficient to explain students' level of understanding of the discussed topics. A more thorough analysis was therefore performed using temporal analytics with step-wise discourse analysis to trace the ideas and determine their impact on communal discourse. The results indicated that most ideas in the discourse could be traced to the origin of a set of improvable ideas, which impacted and increased the community's interest in sharing and discussing ideas through discourse.
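One common way to run social network analysis over discovered keywords, as described above, is a keyword co-occurrence graph. This is a minimal sketch, not the authors' tool; the posts and keywords are hypothetical.

```python
# Sketch: build a keyword co-occurrence network from forum posts, then
# rank keywords by centrality. Data are fabricated.
import itertools
import networkx as nx

posts = [
    {"scaffolding", "inquiry", "assessment"},
    {"scaffolding", "collaboration"},
    {"inquiry", "collaboration", "assessment"},
]

G = nx.Graph()
for keywords in posts:
    for a, b in itertools.combinations(sorted(keywords), 2):
        # edge weight = number of posts in which both keywords co-occur
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality highlights candidate "promising ideas".
central = nx.degree_centrality(G)
print(sorted(central, key=central.get, reverse=True))
```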
"Opportunities for personalization in modeling students as Bayesian learners"
Charles Lang
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027410

Abstract: This paper is a proof-of-concept demonstration of a novel Bayesian framework for making inferences about individual students and the context in which they are learning. It has implications both for efforts to automate personalized instruction and for probabilistic modeling of educational context. By modelling students as Bayesian learners, individuals who weigh their prior beliefs against current circumstantial data to reach conclusions, it becomes possible to estimate both performance and the impact of the educational environment in probabilistic terms. The framework is tested with a Bayesian algorithm that characterizes student prior knowledge of course material and predicts student performance, demonstrated on simulated data. The algorithm generates estimates that behave qualitatively as expected on simulated data and predicts student performance substantially better than chance. A discussion of the results and the conceptual benefits of the framework follows.
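The simplest concrete instance of "a student as a Bayesian learner" is a Beta-Binomial update: a Beta prior over the probability of answering correctly, updated after each attempt. This is a generic sketch with a fabricated attempt sequence, not the paper's actual algorithm.

```python
# Minimal Beta-Binomial sketch of a Bayesian learner (not the paper's
# algorithm): the prior encodes prior knowledge; each attempt updates it.

def update(alpha, beta, correct):
    """Beta(alpha, beta) prior over P(correct); one Bernoulli observation."""
    return (alpha + 1, beta) if correct else (alpha, beta + 1)

# Hypothetical attempt sequence: 1 = correct, 0 = incorrect.
alpha, beta = 2.0, 2.0              # weakly informative prior knowledge
for outcome in [1, 1, 0, 1, 1]:
    alpha, beta = update(alpha, beta, outcome)

# Posterior mean = predicted probability of a correct next attempt.
p_next = alpha / (alpha + beta)
print(round(p_next, 3))             # (2+4)/(2+4+2+1) = 6/9, about 0.667
```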
"Understanding the relationship between technology use and cognitive presence in MOOCs"
Vitomir Kovanović, Srećko Joksimović, Oleksandra Poquet, T. Hennis, S. Dawson, D. Gašević
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3029471

Abstract: In this poster, we present the results of a study that examined the relationship between differences in students' use of available technology and their levels of cognitive presence within a MOOC context. Cognitive presence is a construct used to measure the level of practical inquiry in the Community of Inquiry model. Our results revealed three clusters based on student technology use. The clusters differed significantly in their levels of cognitive presence, most notably in their levels of problem resolution.
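Clustering students by their technology-use profiles, as in the three-cluster result above, is typically done with a standard algorithm such as k-means. The feature counts below are fabricated for illustration.

```python
# Sketch: k-means clustering of students by technology-use features
# (fabricated counts), analogous to the paper's three-cluster finding.
import numpy as np
from sklearn.cluster import KMeans

# rows = students; columns = e.g. video views, forum posts, quiz attempts
use = np.array([
    [40, 2, 10], [38, 1, 12], [5, 30, 8],
    [6, 28, 9], [20, 15, 40], [22, 14, 38],
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(use)
print(km.labels_)   # cluster assignment per student
```

The clusters would then be compared on an external measure, here levels of cognitive presence, to interpret them.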
"Don't call it a comeback: academic recovery and the timing of educational technology adoption"
M. Brown, R. DeMonbrun, Stephanie D. Teasley
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027393

Abstract: Recent research using learning analytics data to explore student performance over the course of a term suggests that a substantial percentage of students who are classified as academically struggling manage to recover. In this study, we report the results of a hazard analysis based on students' behavioral engagement with different digital instructional technologies over the course of a semester. We observe substantially different adoption and use behavior between students who did and did not experience academic difficulty in the course. Students who experienced moderate academic difficulty benefited most from tools that helped them plan their study behaviors, while students who experienced more severe academic difficulty benefited from tools that helped them prepare for exams. Students adopted most tools and system features before they experienced academic difficulty, and students who adopted early were more likely to recover.
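A discrete-time hazard analysis of recovery, in the spirit of the study above, estimates for each week the fraction of still-struggling students who recover in that week. The data below are fabricated and the computation is a generic sketch, not the authors' model.

```python
# Sketch of a discrete-time hazard computation on fabricated data:
# hazard(t) = recoveries in week t / students still at risk entering week t.
# Each student: week of recovery, or None if never recovered (censored).
recovery_week = [3, 3, 5, None, 2, 5, None, 4]

at_risk = len(recovery_week)
hazard = {}
for t in range(1, 7):
    events = sum(1 for w in recovery_week if w == t)
    hazard[t] = events / at_risk if at_risk else 0.0
    at_risk -= events   # recovered students leave the risk set

print(hazard)
```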
"Reflective writing analytics: empirically determined keywords of written reflection"
T. Ullmann
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027394

Abstract: Despite their importance for educational practice, reflective writings are still analysed and assessed manually, constraining the use of this educational technique. Recently, research has begun to investigate automated approaches for analysing reflective writing. Foundational to many automated approaches is knowledge of the words that are important for the genre. This research presents keywords specific to several categories of a reflective writing model. The keywords were derived from eight datasets, containing several thousand instances, using the log-likelihood method. Both performance measures, accuracy and Cohen's κ, were estimated with ten-fold cross-validation. The keywords reached an average accuracy of 0.78 across all eight categories and fair to good inter-rater reliability for most categories, even though the approach makes no use of sophisticated rule-based mechanisms or machine learning. This research contributes to the development of automated reflective writing analytics built on data-driven empirical foundations.
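The log-likelihood method for keyword extraction usually refers to Dunning's G² statistic on a 2x2 contingency table comparing a word's frequency in a target corpus against a reference corpus. A minimal sketch, with fabricated counts:

```python
# Dunning's G2 log-likelihood keyword statistic (fabricated counts).
import math

def log_likelihood(a, b, c, d):
    """G2 for a 2x2 table: a = word count in target corpus, b = word
    count in reference corpus, c / d = all other words in each corpus."""
    def term(obs, exp):
        return obs * math.log(obs / exp) if obs > 0 else 0.0
    n = a + b + c + d
    e_a = (a + b) * (a + c) / n
    e_b = (a + b) * (b + d) / n
    e_c = (c + d) * (a + c) / n
    e_d = (c + d) * (b + d) / n
    return 2 * (term(a, e_a) + term(b, e_b) + term(c, e_c) + term(d, e_d))

# "feel" appears 30 times in 1000 reflective tokens vs 10 in 1000
# reference tokens; a large G2 marks it as a candidate keyword.
g2 = log_likelihood(30, 10, 970, 990)
print(round(g2, 2))
```

Words are ranked by G², and those above a significance threshold (3.84 at p < .05, 1 df) become candidate keywords for a category.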
"New features in Wikiglass, a learning analytic tool for visualizing collaborative work on wikis"
Xiao Hu, C. Yang, Chen Qiao, Xiaoyu Lu, S. Chu
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3029489

Abstract: Wikiglass is a learning analytics tool for visualizing collaborative work on wikis built by groups of secondary or primary school students. This poster presents new features of Wikiglass developed recently in response to teachers' requests, including flexible selection of date ranges, a revision network, and thinking-order detection. The new features are currently being used and evaluated in two secondary schools in Hong Kong.
"What do students want?: towards an instrument for students' evaluation of quality of learning analytics services"
A. Whitelock-Wainwright, D. Gašević, R. Tejeiro
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027419

Abstract: Quality assurance in any organization is important for ensuring that service users are satisfied with the service offered. For higher education institutions, service quality measures allow ideological gaps to be both identified and resolved. The learning analytics community, however, has rarely addressed the concept of service quality. A potential outcome of this is the provision of a learning analytics service that meets only the expectations of certain stakeholders (e.g., managers), whilst overlooking those who are most important (e.g., students). To resolve this issue, we outline a framework and our current progress towards developing a scale to assess student expectations and perceptions of learning analytics as a service.
"How effective is your facilitation?: group-level analytics of MOOC forums"
Oleksandra Poquet, S. Dawson, Nia Dowell
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027404

Abstract: The facilitation of interpersonal relationships within a respectful learning climate is an important aspect of teaching practice. However, in large-scale online contexts such as MOOCs, the number of learners and the highly asynchronous nature of interaction militate against the development of a sense of belonging and dyadic trust. Given these challenges, instead of conventional instruments that reflect learners' affective perceptions, we suggest a set of indicators that can be used to evaluate social activity in relation to the participation structure. These group-level indicators can help teachers gain insight into the evolution of social activity shaped by their facilitation choices. For this study, group-level indicators were derived from measuring information exchange among returning MOOC posters. By conceptualizing this group as an identity-based community, we can apply exponential random graph modelling to explain the network's structure through the configurations of direct reciprocity, triadic-level exchange, and the effect of participants demonstrating super-posting behavior. The findings provide novel insights into network amplification and highlight differences between courses with different facilitation strategies. Direct reciprocation was characteristic of non-facilitated groups. Exchange at the level of triads was more prominent in highly facilitated online communities with instructor involvement. Super-posting activity was less pronounced in networks with higher triadic exchange and more pronounced in networks with higher direct reciprocity.
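Exponential random graph models are usually fitted in R (the statnet suite); the sketch below only computes, on a fabricated reply network, the kinds of descriptive statistics (reciprocity, triadic closure) that the model configurations named above are built from.

```python
# Sketch: network statistics underlying ERGM configurations, computed
# with networkx on a fabricated "who replied to whom" forum network.
import networkx as nx

# directed edge u -> v: "u replied to v"
G = nx.DiGraph([
    ("a", "b"), ("b", "a"),      # a reciprocated dyad
    ("a", "c"), ("c", "b"),      # part of a triadic exchange
    ("d", "a"),
])

print(nx.overall_reciprocity(G))           # share of reciprocated edges
print(nx.transitivity(G.to_undirected()))  # triadic closure
```

An ERGM then estimates whether such configurations occur more often than chance given the network's density, which is what licenses the comparisons across facilitation strategies.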
"An instructor dashboard for real-time analytics in interactive programming assignments"
Nicholas Diana, Michael Eagle, John C. Stamper, Shuchi Grover, M. Bienkowski, Satabdi Basu
Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 2017. DOI: 10.1145/3027385.3027441

Abstract: Many introductory programming environments generate a large amount of log data, but making insights from these data accessible to instructors remains a challenge. This research demonstrates that student outcomes can be accurately predicted from student program states at various points throughout the course, and it integrates the resulting predictive models into an instructor dashboard. The dashboard's effectiveness is evaluated by measuring how well its analytics correctly suggest that the instructor help the students classified as most in need. Finally, we describe a method for matching low-performing students with high-performing peer tutors, and show that including peer tutors increases not only the amount of help given but also the consistency of help availability.
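Predicting outcomes from program-state features is, in its simplest form, a supervised classification task. The sketch below uses logistic regression on fabricated features; the paper's actual feature set and model are not specified here.

```python
# Sketch (fabricated data): predict a pass/fail outcome from mid-course
# program-state features, the kind of model a dashboard could surface.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: errors so far, fraction of tests passing, edits per minute
X = np.array([
    [12, 0.2, 1.0], [9, 0.3, 1.2], [2, 0.9, 3.0],
    [1, 0.95, 2.5], [7, 0.5, 1.8], [3, 0.8, 2.2],
])
y = np.array([0, 0, 1, 1, 0, 1])   # 1 = passed the course

clf = LogisticRegression().fit(X, y)
# Estimated P(pass) for a new student's current program state.
print(clf.predict_proba([[10, 0.25, 1.1]])[0, 1])
```

Students with the lowest predicted probability of passing are the ones the dashboard would flag as most in need of instructor or peer-tutor help.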