D. Clow, Rebecca Ferguson, Kirsty Kitto, Y. Cho, Mike Sharkey, C. Aguerrebere
The 2nd LAK Failathon will build on the successful event in 2016 and extend the workshop beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other's failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other's mistakes. It was very successful, and there was strong support for running it as an annual event. This workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other's failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base.
Beyond failure: the 2nd LAK Failathon. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3029429
In online learning environments, learners are often required to be more autonomous in their approach to learning. In scaled online learning environments, such as Massive Open Online Courses (MOOCs), learners' ability to access teachers and peers for help with their study differs from that in more traditional educational environments. This exploratory study examines the help-seeking behaviour of learners across several MOOCs with different audiences and designs. Learning analytics techniques (e.g., dimension reduction with t-SNE and clustering with affinity propagation) were applied to identify clusters and determine profiles of learners on the basis of their help-seeking behaviours. Five help-seeking learner profiles were identified, which provide insight into how learners' help-seeking behaviour relates to performance. A more in-depth understanding of how learners seek help in large online learning environments is important to inform the way support for learners can be incorporated into the design and facilitation of online courses delivered at scale.
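The abstract names its pipeline explicitly: t-SNE for dimension reduction followed by affinity propagation for clustering. A minimal sketch of that pipeline, using invented help-seeking count features rather than the study's real data, might look like:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Hypothetical per-learner help-seeking counts (feature names invented):
# forum posts, forum views, help-page visits, search queries
X = rng.poisson(lam=[3, 10, 2, 5], size=(200, 4)).astype(float)

# Reduce the behaviour features to 2-D with t-SNE before clustering
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Affinity propagation chooses the number of clusters itself,
# which suits an exploratory profile analysis
ap = AffinityPropagation(random_state=0).fit(emb)
labels = ap.labels_
print(len(set(labels)))  # number of candidate learner profiles found
```

Each cluster would then be characterised (mean posting rate, viewing rate, and so on) to interpret it as a help-seeking profile.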
Using learning analytics to explore help-seeking learner profiles in MOOCs. L. Corrin, P. D. Barba, Aneesha Bakharia. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027448
Caitlin Mills, Igor Fridman, W. Soussou, Disha Waghray, A. Olney, S. D’Mello
Current learning technologies have no direct way to assess students' mental effort: are they in deep thought, struggling to overcome an impasse, or are they zoned out? To address this challenge, we propose the use of EEG-based cognitive load detectors during learning. Despite its potential, EEG has not yet been utilized as a way to optimize instructional strategies. We take an initial step towards this goal by assessing how experimentally manipulated (easy and difficult) sections of an intelligent tutoring system (ITS) influenced EEG-based estimates of students' cognitive load. We found a main effect of task difficulty on EEG-based cognitive load estimates, which were also correlated with learning performance. Our results show that EEG can be a viable source of data to model learners' mental states across a 90-minute session.
Put your thinking cap on: detecting cognitive load using EEG during learning. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027431
A. Gibson, A. Aitken, Ágnes Sándor, S. B. Shum, Cherie Tsingos-Lucas, Simon Knight
Reflective writing can provide a powerful way for students to integrate professional experience and academic learning. However, writing reflectively requires high quality actionable feedback, which is time-consuming to provide at scale. This paper reports progress on the design, implementation, and validation of a Reflective Writing Analytics platform to provide actionable feedback within a tertiary authentic assessment context. The contributions are: (1) a new conceptual framework for reflective writing; (2) a computational approach to modelling reflective writing, deriving analytics, and providing feedback; (3) the pedagogical and user experience rationale for platform design decisions; and (4) a pilot in a student learning context, with preliminary data on educator and student acceptance, and the extent to which we can evidence that the software provided actionable feedback for reflective writing.
Reflective writing analytics for actionable feedback. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027436
C. Herodotou, B. Rienties, Avinash Boroowa, Z. Zdráhal, Martin Hlosta, G. Naydenova
In this paper, we describe a large-scale study of the use of predictive learning analytics data with 240 teachers in 10 modules at a distance learning higher education institution. The aim of the study was to illuminate teachers' uses and practices of predictive data, in particular to identify how predictive data was used to support students at risk of not completing or failing a module. Data were collected from statistical analysis of 17,033 students' performance by the end of the intervention, teacher usage statistics, and five individual semi-structured interviews with teachers. Findings revealed that teachers endorsed the use of predictive data to support their practice, albeit in diverse ways, and raised the need to devise appropriate intervention strategies to support students at risk.
Implementing predictive learning analytics on a large scale: the teacher's perspective. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027397
D. D. Mitri, Maren Scheffel, H. Drachsler, D. Börner, Stefaan Ternier, M. Specht
Learning Pulse explores whether a machine learning approach applied to multimodal data such as heart rate, step count, weather condition and learning activity can be used to predict learning performance in self-regulated learning settings. An experiment lasting eight weeks was carried out with PhD students as participants, each wearing a Fitbit HR wristband and having the applications running on their computers logged during their learning and working activities throughout the day. A software infrastructure for collecting multimodal learning experiences was implemented. As part of this infrastructure, a Data Processing Application was developed to pre-process and analyse the data and to generate predictions that provide feedback to users about their learning performance. Data from the different sources were stored in a cloud-based Learning Record Store using the xAPI standard. The participants were asked to rate their learning experience through an Activity Rating Tool, indicating their perceived level of productivity, stress, challenge and abilities. These self-reported performance indicators were used as markers to train a linear mixed-effects model that generates learner-specific predictions of learning performance. We discuss the advantages and limitations of this approach, highlighting points for further development.
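The learner-specific predictions rest on a linear mixed-effects model fitted to the self-reported indicators. A sketch under those assumptions, with entirely synthetic Fitbit-style data and a random intercept per learner (the paper's exact model specification is not reproduced here), could look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Synthetic weekly records for 10 learners: heart rate, step count,
# and a self-reported productivity rating (all numbers invented)
n_learners, n_obs = 10, 40
learner = np.repeat(np.arange(n_learners), n_obs)
heart_rate = rng.normal(70, 8, n_learners * n_obs)
steps = rng.normal(6000, 1500, n_learners * n_obs)
productivity = (
    0.02 * steps / 100
    - 0.05 * (heart_rate - 70)
    + np.repeat(rng.normal(0, 0.5, n_learners), n_obs)  # per-learner offset
    + rng.normal(0, 1, n_learners * n_obs)
)
df = pd.DataFrame(dict(learner=learner, heart_rate=heart_rate,
                       steps=steps, productivity=productivity))

# A random intercept per learner is what makes the predictions
# learner-specific rather than one global regression
model = smf.mixedlm("productivity ~ heart_rate + steps",
                    df, groups=df["learner"])
result = model.fit()
pred = result.fittedvalues  # in-sample learner-specific predictions
```

With real data, the fitted model would be applied to each incoming window of sensor data to feed the feedback loop the abstract describes.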
Learning pulse: a machine learning approach for predicting performance in self-regulated learning using multimodal data. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027447
Heeryung Choi, Christopher A. Brooks, Kevyn Collins-Thompson
In this work we investigate the use of deep learning for text analysis to measure elements of student thinking related to issues of privilege, oppression, diversity and social justice. We leverage historical expert annotations as well as a large lexical model to create a more generalizable vocabulary for identifying these characteristics in short student writing. We demonstrate the feasibility of this approach, and identify further areas for research.
What does student writing tell us about their thinking on social justice? In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3029477
Guanliang Chen, Dan Davis, Markus Krause, C. Hauff, G. Houben
Massive Open Online Courses (MOOCs) aim to educate the world, especially learners from developing countries. While MOOCs are certainly available to the masses, they are not yet fully accessible. Although all course content is just clicks away, deeply engaging with a MOOC requires a substantial time commitment, which frequently becomes a barrier to success. To mitigate the time required to learn from a MOOC, we introduce a design that enables learners to earn money by applying what they learn in the course to real-world marketplace tasks. We present a Paid Task Recommender System (Rec-$ys), which automatically recommends course-relevant tasks, drawn from online freelance platforms, to learners. Rec-$ys has been deployed in a data analysis MOOC and is currently under evaluation.
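The abstract does not say how Rec-$ys decides which freelance tasks are course-relevant, so purely as a hypothetical illustration, a simple content-based matcher could rank task postings by TF-IDF cosine similarity to course-unit descriptions (all texts below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented course-unit descriptions and freelance-task postings
course_units = [
    "data cleaning and wrangling with pandas dataframes",
    "plotting histograms and scatter charts with matplotlib",
]
tasks = [
    "pandas data cleaning for a sales spreadsheet",
    "design a logo",
    "create matplotlib charts and histograms for a report",
]

vec = TfidfVectorizer()
mat = vec.fit_transform(course_units + tasks)
course_mat = mat[: len(course_units)]
task_mat = mat[len(course_units):]

# For each course unit, rank tasks by textual similarity
sim = cosine_similarity(course_mat, task_mat)
best = sim.argmax(axis=1)
print(best)  # index of the most similar task per course unit
```

A deployed system would of course also weigh the learner's progress, skill level and pay, but the abstract leaves those details to the evaluation.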
Buying time: enabling learners to become earners with a real-world paid task recommender system. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3029469
Simone Kopeinik, E. Lex, Paul Seitlinger, D. Albert, Tobias Ley
In online social learning environments, tagging has demonstrated its potential to facilitate search, to improve recommendations, and to foster reflection and learning. Studies have shown that shared understanding needs to be established in the group as a prerequisite for learning. We hypothesise that this can be fostered through tag recommendation strategies that contribute to semantic stabilization. In this study, we investigate the application of two tag recommenders inspired by models of human memory: (i) the base-level learning equation (BLL) and (ii) Minerva. BLL models the frequency and recency of tag use, while Minerva is based on frequency of tag use and semantic context. We test the impact of both tag recommenders on semantic stabilization in an online study with 56 students completing a group-based inquiry learning project in school. We find that displaying tags from other group members contributes significantly to semantic stabilization in the group, compared with a strategy where tags from the students' individual vocabularies are used. Testing the accuracy of the different recommenders revealed that algorithms using frequency counts, such as BLL, performed better when individual tags were recommended; when group tags were recommended, the Minerva algorithm performed better. We conclude that tag recommenders which expose learners to each other's tag choices by simulating search processes on learners' semantic memory structures show potential to support semantic stabilization and thus inquiry-based learning in groups.
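The base-level learning equation referenced here comes from the ACT-R theory of memory: an item's activation is B_i = ln(Σ_j t_j^(-d)), a power-law decay summed over the times t_j since each past use, so items used often and recently score highest. A toy tag scorer along those lines, with a made-up tag history and the commonly used decay d = 0.5:

```python
import math

def bll_score(use_times, now, d=0.5):
    """Base-level activation: ln of the sum of (age)^-d over past uses.

    use_times: timestamps at which the learner used this tag.
    now: current timestamp (same units, e.g. hours)."""
    return math.log(sum((now - t) ** (-d) for t in use_times))

# Hypothetical tag history for one learner (timestamps in hours)
history = {
    "photosynthesis": [1.0, 5.0, 9.0],  # used often, including recently
    "chlorophyll": [2.0],               # used once, long ago
}
now = 10.0

# Recommend tags in descending order of base-level activation
ranked = sorted(history, key=lambda tag: bll_score(history[tag], now),
                reverse=True)
print(ranked)  # the frequently and recently used tag ranks first
```

Minerva-style scoring would additionally weight each candidate tag by its semantic similarity to the current resource, which is what the study contrasts BLL against.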
Supporting collaborative learning with tag recommendations: a real-world study in an inquiry-based classroom project. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027421
Emotions form an integral part of our cognitive function. Past research has demonstrated conclusive associations between emotions and learning achievement [7, 26, 27]. This paper used a person-centered approach to explore students' (N = 65) facial behavior, emotions, learner traits and learning. An automatic facial expression recognition system was used to detect middle school and university students' real-time facial movements while they learned scientific tasks in a 3D narrative video game. The results indicated a strong statistical relationship between three specific facial movements (outer brow raising, lip tightening and lip pressing), students' self-regulatory learning strategies and learning performance. Outer brow raising (AU2) had strong predictive power when students were confronted with obstacles and did not know how to proceed. Lip tightening and lip pressing (AU23 and AU24) were predictive when students engaged in tasks requiring deep processing of incoming information and short-term memory activation. The findings also suggested a correlational relationship between students' use of self-regulatory learning strategies and a neutral state. It is hoped that this study will provide empirical evidence to help develop a deeper understanding of the relationship between facial behavior and complex learning, especially in educational games.
Person-centered approach to explore learner's emotionality in learning within a 3D narrative game. Zhenhua Xu, Earl Woodruff. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 13 March 2017. DOI: 10.1145/3027385.3027432