Simon Knight, L. Allen, Andrew Gibson, D. McNamara, S. B. Shum
There is untapped potential in achieving the full impact of learning analytics through the integration of tools into practical pedagogic contexts. To realise this potential, more work must be conducted to support educators in developing learning analytics literacy. The proposed workshop addresses this need by building capacity in the learning analytics community and developing an approach to resourcing for building 'writing analytics literacy'.
{"title":"Writing analytics literacy: bridging from research to practice","authors":"Simon Knight, L. Allen, Andrew Gibson, D. McNamara, S. B. Shum","doi":"10.1145/3027385.3029425","DOIUrl":"https://doi.org/10.1145/3027385.3029425","url":null,"abstract":"There is untapped potential in achieving the full impact of learning analytics through the integration of tools into practical pedagogic contexts. To meet this potential, more work must be conducted to support educators in developing learning analytics literacy. The proposed workshop addresses this need by building capacity in the learning analytics community and developing an approach to resourcing for building 'writing analytics literacy'.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123587129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Clow, Rebecca Ferguson, Kirsty Kitto, Y. Cho, Mike Sharkey, C. Aguerrebere
This poster will be a chance for a wider LAK audience to engage with the 2nd LAK Failathon workshop. Both of these will build on the successful Failathon event in 2016 and extend beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other's failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other's mistakes. It was very successful, and there was strong support for running it as an annual event. The 2nd LAK Failathon workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other's failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base. This poster is an opportunity for wider feedback on the plans developed in the workshop, with interactive use of sticky notes to add new ideas and coloured dots to illustrate prioritisation. This broadens the participant base in this important work, which should improve the quality of the plans and the commitment of the community to delivering them.
{"title":"Beyond failure: the 2nd LAK Failathon poster","authors":"D. Clow, Rebecca Ferguson, Kirsty Kitto, Y. Cho, Mike Sharkey, C. Aguerrebere","doi":"10.1145/3027385.3029447","DOIUrl":"https://doi.org/10.1145/3027385.3029447","url":null,"abstract":"This poster will be a chance for a wider LAK audience to engage with the 2nd LAK Failathon workshop. Both of these will build on the successful Failathon event in 2016 and extend beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Thus providing an alternative forum for practitioners and researchers to learn from each other's failures can be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, to help participants learn from each other's mistakes. It was very successful, and there was strong support for running it as an annual event. The 2nd LAK Failathon workshop will build on that success, with twin objectives to provide an environment for individuals to learn from each other's failures, and also to co-develop plans for how we as a field can better build and deploy our evidence base. This poster is an opportunity for wider feedback on the plans developed in the workshop, with interactive use of sticky notes to add new ideas and coloured dots to illustrate prioritisation. This broadens the participant base in this important work, which should improve the quality of the plans and the commitment of the community to delivering them.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123626723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Donnelly, Nathaniel Blanchard, A. Olney, Sean Kelly, M. Nystrand, S. D’Mello
We investigate automatic detection of teacher questions from audio recordings collected in live classrooms with the goal of providing automated feedback to teachers. Using a dataset of audio recordings from 11 teachers across 37 class sessions, we automatically segment the audio into individual teacher utterances and code each as containing a question or not. We train supervised machine learning models to detect the human-coded questions using high-level linguistic features extracted from automatic speech recognition (ASR) transcripts, acoustic and prosodic features from the audio recordings, as well as context features, such as timing and turn-taking dynamics. Models are trained and validated independently of the teacher to ensure generalization to new teachers. We are able to distinguish questions and non-questions with a weighted F1 score of 0.69. A comparison of the three feature sets indicates that a model using linguistic features outperforms those using acoustic-prosodic and context features for question detection, but the combination of features yields a 5% improvement in overall accuracy compared to linguistic features alone. We discuss applications for pedagogical research, teacher formative assessment, and teacher professional development.
{"title":"Words matter: automatic detection of teacher questions in live classroom discourse using linguistics, acoustics, and context","authors":"P. Donnelly, Nathaniel Blanchard, A. Olney, Sean Kelly, M. Nystrand, S. D’Mello","doi":"10.1145/3027385.3027417","DOIUrl":"https://doi.org/10.1145/3027385.3027417","url":null,"abstract":"We investigate automatic detection of teacher questions from audio recordings collected in live classrooms with the goal of providing automated feedback to teachers. Using a dataset of audio recordings from 11 teachers across 37 class sessions, we automatically segment the audio into individual teacher utterances and code each as containing a question or not. We train supervised machine learning models to detect the human-coded questions using high-level linguistic features extracted from automatic speech recognition (ASR) transcripts, acoustic and prosodic features from the audio recordings, as well as context features, such as timing and turn-taking dynamics. Models are trained and validated independently of the teacher to ensure generalization to new teachers. We are able to distinguish questions and non-questions with a weighted F1 score of 0.69. A comparison of the three feature sets indicates that a model using linguistic features outperforms those using acoustic-prosodic and context features for question detection, but the combination of features yields a 5% improvement in overall accuracy compared to linguistic features alone. We discuss applications for pedagogical research, teacher formative assessment, and teacher professional development.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125877026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clarissa Lau, Jeanne Sinclair, M. Taub, R. Azevedo, E. Jang
Self-regulated learning (SRL) is a process that fluctuates considerably as students actively deploy their metacognitive and cognitive processes during learning. In this paper, we apply an extension of latent profiling, latent transition analysis (LTA), which traces how students' SRL latent class memberships develop over time. We briefly review the theoretical foundations of SRL and discuss the value of using LTA to investigate this multidimensional concept. This study is based on college students (n = 75) learning about the human circulatory system while using MetaTutor, an intelligent tutoring system that adaptively supports SRL and targets specific metacognitive SRL processes including judgment of learning (JOL) and content evaluation (CE). Preliminary results identify transition probabilities between SRL profiles across four distinct events associated with the use of SRL.
{"title":"Transitioning self-regulated learning profiles in hypermedia-learning environments","authors":"Clarissa Lau, Jeanne Sinclair, M. Taub, R. Azevedo, E. Jang","doi":"10.1145/3027385.3027443","DOIUrl":"https://doi.org/10.1145/3027385.3027443","url":null,"abstract":"Self-regulated learning (SRL) is a process that highly fluctuates as students actively deploy their metacognitive and cognitive processes during learning. In this paper, we apply an extension of latent profiling, latent transition analysis (LTA), which investigates the longitudinal development of students' SRL latent class memberships over time. We will briefly review the theoretical foundations of SRL and discuss the value of using LTA to investigate this multidimensional concept. This study is based on college students (n = 75) learning about the human circulatory system while using MetaTutor, an intelligent tutoring system that adaptively supports SRL and targets specific metacognitive SRL processes including judgment of learning (JOL) and content evaluation (CE). Preliminary results identify transitional probabilities of SRL profiles from four distinct events associated with the use of SRL.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126009897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tak-Lam Wong, Haoran Xie, Fu Lee Wang, C. Poon, D. Zou
We have developed a method called skill2vec, which applies big data techniques to automatically analyze learning data and discover skill relationships, leading to more objective and data-informed decision making. Skill2vec is a neural network architecture that transforms a skill into a new vector space, called an embedding. The embedding facilitates the comparison and visualization of different skills and their relationships. We conducted a pilot experiment using a benchmark dataset to demonstrate the effectiveness of our method.
{"title":"An automatic approach for discovering skill relationship from learning data","authors":"Tak-Lam Wong, Haoran Xie, Fu Lee Wang, C. Poon, D. Zou","doi":"10.1145/3027385.3029485","DOIUrl":"https://doi.org/10.1145/3027385.3029485","url":null,"abstract":"We have developed a method called skill2vec, which applies big data techniques to automatically analyze the learning data to discover skill relationship, leading to a more objective and data-informed decision making. Skill2vec is a neural network architecture which can transform a skill to a new vector space called embedding. The embedding can facilitate the comparison and visualization of different skills and their relationship. We conducted a pilot experiment using benchmark dataset to demonstrate the effectiveness of our method.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126167697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Huptych, Michal Bohuslavek, Martin Hlosta, Z. Zdráhal
This paper introduces two measures for the recommendation of study materials based on students' past study activity. We use records from the Virtual Learning Environment (VLE) and analyse the activity of previous students. We assume that the activity of past students represents patterns that can serve as a basis for recommendations to current students. The measures we define are Relevance, which describes the expected VLE activity derived from previous students of the course, and Effort, which represents the actual effort of individual current students. Based on these measures, we propose a composite measure, which we call Importance. We use data from previous course presentations to evaluate the consistency of students' behaviour. We use the correlation between Relevance and Average Effort to compare the behaviour of two different student cohorts, and the Root Mean Square Error to measure the deviation of individual student Effort from Average Effort.
{"title":"Measures for recommendations based on past students' activity","authors":"M. Huptych, Michal Bohuslavek, Martin Hlosta, Z. Zdráhal","doi":"10.1145/3027385.3027426","DOIUrl":"https://doi.org/10.1145/3027385.3027426","url":null,"abstract":"This paper introduces two measures for the recommendation of study materials based on students' past study activity. We use records from the Virtual Learning Environment (VLE) and analyse the activity of previous students. We assume that the activity of past students represents patterns, which can be used as a basis for recommendations to current students. The measures we define are Relevance, for description of a supposed VLE activity derived from previous students of the course, and Effort, that represents the actual effort of individual current students. Based on these measures, we propose a composite measure, which we call Importance. We use data from the previous course presentations to evaluate of the consistency of students' behaviour. We use correlation of the defined measures Relevance and Average Effort to evaluate the behaviour of two different student cohorts and the Root Mean Square Error to measure the deviation of Average Effort and individual student Effort.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130628936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daniel Spikol, L. Prieto, M. Rodríguez-Triana, M. Worsley, X. Ochoa, M. Cukurova
Multimodal Learning Analytics (MMLA) captures, integrates and analyzes learning traces from different sources in order to obtain a more holistic understanding of the learning process, wherever it happens. MMLA leverages the increasingly widespread availability of diverse sensors, high-frequency data collection technologies and sophisticated machine learning and artificial intelligence techniques. The aim of this workshop is twofold: first, to expose participants to, and develop, different multimodal datasets that reflect how MMLA can bring new insights and opportunities to investigate complex learning processes and environments; second, to collaboratively identify a set of grand challenges for further MMLA research, built upon the foundations of previous workshops on the topic.
{"title":"Current and future multimodal learning analytics data challenges","authors":"Daniel Spikol, L. Prieto, M. Rodríguez-Triana, M. Worsley, X. Ochoa, M. Cukurova","doi":"10.1145/3027385.3029437","DOIUrl":"https://doi.org/10.1145/3027385.3029437","url":null,"abstract":"Multimodal Learning Analytics (MMLA) captures, integrates and analyzes learning traces from different sources in order to obtain a more holistic understanding of the learning process, wherever it happens. MMLA leverages the increasingly widespread availability of diverse sensors, high-frequency data collection technologies and sophisticated machine learning and artificial intelligence techniques. The aim of this workshop is twofold: first, to expose participants to, and develop, different multimodal datasets that reflect how MMLA can bring new insights and opportunities to investigate complex learning processes and environments; second, to collaboratively identify a set of grand challenges for further MMLA research, built upon the foundations of previous workshops on the topic.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130843063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Catherine A. Spann, James D Schaeffer, George Siemens
The ability to pay attention and self-regulate is a fundamental skill required of learners of all ages. Learning analytics researchers have to date relied on data generated by a computing system (such as a learning management system, click stream or log data) to examine learners' self-regulatory abilities. The development of wearable computing through fitness trackers, watches, heart rate monitors, and clinical grade devices such as Empatica's E4 wristband now provides researchers with access to biometric data as students interact with learning content or software systems. This level of data collection promises to provide valuable insight into cognitive and affective experiences of individuals, especially when combined with traditional learning analytics data sources. Our study details the use of wearable technologies to assess the relationship between heart rate variability and the self-regulatory abilities of an individual. This is relevant for the field of learning analytics as methods become more complex and the assessment of learner performance becomes more nuanced and attentive to the affective factors that contribute to learner success.
{"title":"Expanding the scope of learning analytics data: preliminary findings on attention and self-regulation using wearable technology","authors":"Catherine A. Spann, James D Schaeffer, George Siemens","doi":"10.1145/3027385.3027427","DOIUrl":"https://doi.org/10.1145/3027385.3027427","url":null,"abstract":"The ability to pay attention and self-regulate is a fundamental skill required of learners of all ages. Learning analytics researchers have to date relied on data generated by a computing system (such as a learning management system, click stream or log data) to examine learners' self-regulatory abilities. The development of wearable computing through fitness trackers, watches, heart rate monitors, and clinical grade devices such as Empatica's E4 wristband now provides researchers with access to biometric data as students interact with learning content or software systems. This level of data collection promises to provide valuable insight into cognitive and affective experiences of individuals, especially when combined with traditional learning analytics data sources. Our study details the use of wearable technologies to assess the relationship between heart rate variability and the self-regulatory abilities of an individual. This is relevant for the field of learning analytics as methods become more complex and the assessment of learner performance becomes more nuanced and attentive to the affective factors that contribute to learner success.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126518801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual reality presents exciting new prospects for the delivery of educational materials to students. By combining this technology with biological sensors, a student in a virtual educational environment can be monitored for physiological markers of engagement or more cognitive states of learning. With this information, the virtual reality environment can be adaptively altered to reflect the student's state, essentially creating a closed-loop feedback system. This paper explores these concepts, and presents preliminary data from a combined EEG-VR working memory experiment as a first step toward a broader implementation of an intelligent adaptive learning system. These first-pass neural time-series and oscillatory data suggest that while an EEG-based neurofeedback system is feasible, more work on removing artifacts and identifying relevant and important features will lead to higher prediction accuracy.
{"title":"Enhancing learning through virtual reality and neurofeedback: a first step","authors":"Ryan J. Hubbard, Aldis Sipolins, Lin Zhou","doi":"10.1145/3027385.3027390","DOIUrl":"https://doi.org/10.1145/3027385.3027390","url":null,"abstract":"Virtual reality presents exciting new prospects for the delivery of educational materials to students. By combining this technology with biological sensors, a student in a virtual educational environment can be monitored for physiological markers of engagement or more cognitive states of learning. With this information, the virtual reality environment can be adaptively altered to reflect the student's state, essentially creating a closed-loop feedback system. This paper explores these concepts, and presents preliminary data on a combined EEG-VR working memory experiment as a first step toward a broader implementation of an intelligent adaptive learning system. This first-pass neural time-series and oscillatory data suggest that while an EEG-based neurofeedback system is feasible, more work on removing artifacts and identifying relevant and important features will lead to higher prediction accuracy.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121450899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past five years, ethics and privacy around student data have become major topics of conversation in the learning analytics field. However, the majority of these conversations have been theoretical in nature. The authors of this paper posit that more direct student engagement needs to be undertaken, and share initial data from institutions beginning this process. We find that, while the majority of respondents are accepting of the use of their data by their institutions, approval varies depending on the proposed purpose of the analytics. There also appear to be notable variations between students enrolled at United Kingdom and American institutions.
{"title":"Student perceptions of their privacy in leaning analytics applications","authors":"Kimberly E. Arnold, Niall Sclater","doi":"10.1145/3027385.3027392","DOIUrl":"https://doi.org/10.1145/3027385.3027392","url":null,"abstract":"Over the past five years, ethics and privacy around student data have become major topics of conversation in the learning analytics field. However, the majority of these have been theoretical in nature. The authors of this paper posit that more direct student engagement needs to be undertaken, and initial data from institutions beginning this process is shared. We find that, while the majority of respondents are accepting of the use of their data by their institutions, approval varies depending on the proposed purpose of the analytics. There also appear to be notable variations between students enrolled at United Kingdom and American institutions.","PeriodicalId":160897,"journal":{"name":"Proceedings of the Seventh International Learning Analytics & Knowledge Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122519188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}