Towards understanding the lifespan and spread of ideas: epidemiological modeling of participation on Twitter
S. S. S. Peri, Bodong Chen, A. Dougall, George Siemens
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375515
How ideas develop and evolve is a topic of interest for educators. By understanding this process, designers and educators are better able to support and guide collaborative learning activities. This paper presents an application of our Lifespan of an Idea framework to measure engagement patterns among individuals in communal socio-technical spaces such as Twitter. We equate engagement with social participation, which enables the process of idea expression, spread, and evolution. Social participation transmits ideas from one individual to another, so it can be gauged in much the same way as the spread of a disease, and its temporal dynamics can be modeled through the lens of epidemiological modeling. To test the plausibility of this framework, we investigated social participation on Twitter using the tweet-posting patterns of individuals in three academic conferences and one long-term chat space. We used a basic SIR epidemiological model, with rate parameters estimated via Euler's method applied to the SIR equations and a non-linear least-squares optimization technique. We discuss differences in social participation across these spaces based on how individuals transition between the categories of the SIR model, and we draw inferences on how the total lifetime of each Twitter space affects engagement among its participants. We conclude by discussing the implications of this study and planned future research on refining the Lifespan of an Idea framework.
{"title":"Towards understanding the lifespan and spread of ideas: epidemiological modeling of participation on Twitter","authors":"S. S. S. Peri, Bodong Chen, A. Dougall, George Siemens","doi":"10.1145/3375462.3375515","DOIUrl":"https://doi.org/10.1145/3375462.3375515","url":null,"abstract":"How ideas develop and evolve is a topic of interest for educators. By understanding this process, designers and educators are better able to support and guide collaborative learning activities. This paper presents an application of our Lifespan of an Idea framework to measure engagement patterns among individuals in communal socio-technical spaces like Twitter. We correlated engagement with social participation, enabling the process of idea expression, spread, and evolution. Social participation leads to transmission of ideas from one individual to another and can be gauged in the same way as evaluating diseases. The temporal dynamics of the social participation can be modeled through the lens of epidemiological modeling. To test the plausibility of this framework, we investigated social participation on Twitter using the tweet posting patterns of individuals in three academic conferences and one long term chat space. We used a basic SIR epidemiological model, where the rate parameters were estimated through Euler's solutions to SIR model and non-linear least squares optimization technique. We discuss the differences in the social participation among individuals in these spaces based on their transition behavior into different categories of the SIR model. We also made inferences on how the total lifetime of these different twitter spaces affects the engagement among individuals. We conclude by discussing implications of this study and planned future research of refining the Lifespan of an Idea Framework.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115979722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bayesian model of individual differences and flexibility in inductive reasoning for categorization of examples
Louis Faucon, Jennifer K. Olsen, P. Dillenbourg
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375512
Inductive reasoning is an important educational practice but can be difficult for teachers to support in the classroom because of the preparation and classroom time needed to choose teaching materials that challenge students' current views. Intelligent tutoring systems can potentially facilitate this work by supporting the automatic adaptation of examples based on a student model of the induction process. However, current models of inductive reasoning usually lack two characteristics helpful to adaptive learning environments: modeling individual differences between students and tracing students' learning as they receive feedback. In this paper, we describe a model to predict and simulate students' inductive reasoning in a categorization task. Our approach uses a Bayesian model of students' reasoning processes, which allows us to predict students' choices on categorization questions by accounting for their feature biases. Using data gathered from 222 students categorizing three topics, we find that our model reaches 75% accuracy, 10% higher than a baseline model. Our model contributes to learning analytics by enabling us to assign different bias profiles to individual students and to track how these profiles change over time, giving a better understanding of students' learning processes. It may also be relevant for systematically analysing differences and evolution in students' inductive reasoning strategies while supporting the design of adaptive inductive learning environments.
{"title":"A bayesian model of individual differences and flexibility in inductive reasoning for categorization of examples","authors":"Louis Faucon, Jennifer K. Olsen, P. Dillenbourg","doi":"10.1145/3375462.3375512","DOIUrl":"https://doi.org/10.1145/3375462.3375512","url":null,"abstract":"Inductive reasoning is an important educational practice but can be difficult for teachers to support in the classroom due to the high level of preparation and classroom time needed to choose the teaching materials that challenge students' current views. Intelligent tutoring systems can potentially facilitate this work for teachers by supporting the automatic adaptation of examples based on a student model of the induction process. However, current models of inductive reasoning usually lack two main characteristics helpful to adaptive learning environments, individual differences of students and tracing of students' learning as they receive feedback. In this paper, we describe a model to predict and simulate inductive reasoning of students for a categorization task. Our approach uses a Bayesian model for describing the reasoning processes of students. This model allows us to predict students' choices in categorization questions by accounting for their feature biases. Using data gathered from 222 students categorizing three topics, we find that our model has a 75% accuracy, which is 10% greater than a baseline model. Our model is a contribution to learning analytics by enabling us to assign different bias profiles to individual students and tracking these profile changes over time through which we can gain a better understanding of students' learning processes. This model may be relevant for systematically analysing students' differences and evolution in inductive reasoning strategies while supporting the design of adaptive inductive learning environments.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132752525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rethinking time-on-task estimation with outlier detection accounting for individual, time, and task differences
Quan Nguyen
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375538
Time-on-task estimation, measured as the duration between two consecutive clicks in student log-file data, has been one of the most frequently used metrics in learning analytics research. However, the process of handling outliers (i.e., excessively long durations) in time-on-task estimation is under-explored and often not explicitly reported. One common approach to handling outliers in time-on-task estimation is to 'trim' all durations using a cut-off threshold, such as 60 or 30 minutes. This paper challenges this approach by demonstrating that the treatment of outliers in an educational context should be individual-specific, time-specific, and task-specific. In other words, what counts as an outlier in time-on-task depends on the learning pattern of each student, the stage of the learning process, and the nature of the task involved. The analysis showed that predictive models using time-on-task estimates that account for individual, time, and task differences could explain 3--4% more variance in academic performance than models using an outlier-trimming approach. As an implication, this study provides a theoretically grounded and replicable outlier-detection approach for future learning analytics research using time-on-task estimation.
{"title":"Rethinking time-on-task estimation with outlier detection accounting for individual, time, and task differences","authors":"Quan Nguyen","doi":"10.1145/3375462.3375538","DOIUrl":"https://doi.org/10.1145/3375462.3375538","url":null,"abstract":"Time-on-task estimation, measured as the duration between two consecutive clicks using student log-files data, has been one of the most frequently used metrics in learning analytics research. However, the process of handling outliers (i.e., excessively long durations) in time-on-task estimation is under-explored and often not explicitly reported in many studies. One common approach to handle outliers in time-to-task estimation is to 'trim' all durations using a cut-off threshold, such as 60 or 30 minutes. This paper challenges this existing approach by demonstrating that the treatment of outliers in an educational context should be individual-specific, time-specific, and task-specific. In other words, what can be considered as outliers in time-on-task depends on the learning pattern of each student, the stages during the learning process, and the nature of the task involved. The analysis showed that predictive models using time-on-task estimation accounting for individual, time, and task differences could explain 3--4% more variances in academic performance than models using an outlier trimming approach. As an implication, this study provides a theoretically grounded and replicable outlier detection approach for future learning analytics research when using time-on-task estimation.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"436 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122802740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complementing educational recommender systems with open learner models
Solmaz Abdi, Hassan Khosravi, S. Sadiq, D. Gašević
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375520
Educational recommender systems (ERSs) aim to adaptively recommend a broad range of personalised resources and activities to students that will best meet their learning needs. Commonly, ERSs operate as a "black box", giving students no insight into the rationale for their choices. Recent contributions from the learning analytics and educational data mining communities have emphasised the importance of transparent, understandable and open learner models (OLMs) that provide insight and enhance learners' understanding of their interactions with learning environments. In this paper, we investigate the impact of complementing ERSs with transparent and understandable OLMs that provide justification for their recommendations. We conduct a randomised controlled trial using an ERS with two interfaces ("Non-Complemented Interface" and "Complemented Interface") to determine the effect of our approach on student engagement and students' perception of the effectiveness of the ERS. Overall, our results suggest that complementing an ERS with an OLM can have a positive effect on student engagement and perceived effectiveness, despite potentially making the system harder to navigate. In some cases, however, complementing an ERS with an OLM had the negative consequence of decreasing engagement, understandability, and sense of fairness.
{"title":"Complementing educational recommender systems with open learner models","authors":"Solmaz Abdi, Hassan Khosravi, S. Sadiq, D. Gašević","doi":"10.1145/3375462.3375520","DOIUrl":"https://doi.org/10.1145/3375462.3375520","url":null,"abstract":"Educational recommender systems (ERSs) aim to adaptively recommend a broad range of personalised resources and activities to students that will most meet their learning needs. Commonly, ERSs operate as a \"black box\" and give students no insight into the rationale of their choice. Recent contributions from the learning analytics and educational data mining communities have emphasised the importance of transparent, understandable and open learner models (OLMs) that provide insight and enhance learners' understanding of interactions with learning environments. In this paper, we aim to investigate the impact of complementing ERSs with transparent and understandable OLMs that provide justification for their recommendations. We conduct a randomised control trial experiment using an ERS with two interfaces (\"Non-Complemented Interface\" and \"Complemented Interface\") to determine the effect of our approach on student engagement and their perception of the effectiveness of the ERS. Overall, our results suggest that complementing an ERS with an OLM can have a positive effect on student engagement and their perception about the effectiveness of the system despite potentially making the system harder to navigate. In some cases, complementing an ERS with an OLM has the negative consequence of decreasing engagement, understandability and sense of fairness.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117242010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting learners' effortful behaviour in adaptive assessment using multimodal data
K. Sharma, Z. Papamitsiou, Jennifer K. Olsen, M. Giannakos
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375498
Many factors beyond the required knowledge influence learners' performance on an activity. Learners' on-task effort, which reflects how actively they engage in an activity, is acknowledged to relate strongly to their educational outcomes. However, effort is not directly observable. Multimodal data can provide additional insight into learning processes and may allow for effort estimation. This paper presents an approach for classifying effort in an adaptive assessment context. Specifically, the behaviour of 32 students was captured during an adaptive self-assessment activity using logs and physiological data (i.e., eye-tracking, EEG, wristband, and facial expressions). We applied k-means to the multimodal data to cluster students' behavioural patterns. Next, we predicted students' effort on the upcoming task from the discovered behavioural patterns using a combination of Hidden Markov Models (HMMs) and the Viterbi algorithm. We also compared the results with other state-of-the-art classification algorithms (SVM, Random Forest). Our findings provide evidence that HMMs can encode the relationship between effort and behaviour (captured by the multimodal data) more efficiently than the other methods. Foremost, a practical implication of the approach is that the derived HMMs also pinpoint the moments at which to provide preventive or prescriptive feedback to learners in real time, building upon the relationship between behavioural patterns and the effort learners are putting in.
{"title":"Predicting learners' effortful behaviour in adaptive assessment using multimodal data","authors":"K. Sharma, Z. Papamitsiou, Jennifer K. Olsen, M. Giannakos","doi":"10.1145/3375462.3375498","DOIUrl":"https://doi.org/10.1145/3375462.3375498","url":null,"abstract":"Many factors influence learners' performance on an activity beyond the knowledge required. Learners' on-task effort has been acknowledged for strongly relating to their educational outcomes, reflecting how actively they are engaged in that activity. However, effort is not directly observable. Multimodal data can provide additional insights into the learning processes and may allow for effort estimation. This paper presents an approach for the classification of effort in an adaptive assessment context. Specifically, the behaviour of 32 students was captured during an adaptive self-assessment activity, using logs and physiological data (i.e., eye-tracking, EEG, wristband and facial expressions). We applied k-means to the multimodal data to cluster students' behavioural patterns. Next, we predicted students' effort to complete the upcoming task, based on the discovered behavioural patterns using a combination of Hidden Markov Models (HMMs) and the Viterbi algorithm. We also compared the results with other state-of-the-art classification algorithms (SVM, Random Forest). Our findings provide evidence that HMMs can encode the relationship between effort and behaviour (captured by the multimodal data) in a more efficient way than the other methods. Foremost, a practical implication of the approach is that the derived HMMs also pinpoint the moments to provide preventive/prescriptive feedback to the learners in real-time, by building-upon the relationship between behavioural patterns and the effort the learners are putting in.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114288393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inspiration cards workshops with primary teachers in the early co-design stages of learning analytics
Yvonne Vezzoli, M. Mavrikis, A. Vasalou
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375537
Despite the recognised need to include practitioners in the design of learning analytics (LA), teacher input in particular tends to come late in the design process rather than during the definition of the initial design agenda. This paper presents a case study of a design project tasked with developing LA tools for a reading game for primary school children. Taking a co-design approach, we use the Inspiration Cards Workshop to promote meaningful teacher involvement, even for participants with little background in data literacy or experience in using learning analytics. We discuss the opportunities and limitations of the Inspiration Cards Workshop methodology, and particularly of Inspiration Cards as a design tool, to inform future LA design efforts.
{"title":"Inspiration cards workshops with primary teachers in the early co-design stages of learning analytics","authors":"Yvonne Vezzoli, M. Mavrikis, A. Vasalou","doi":"10.1145/3375462.3375537","DOIUrl":"https://doi.org/10.1145/3375462.3375537","url":null,"abstract":"Despite the recognition of the need to include practitioners in the design of learning analytics (LA), especially teacher input tends to come later in the design process rather than in the definition of the initial design agenda. This paper presents a case study of a design project tasked with developing LA tools for a reading game for primary school children. Taking a co-design approach, we use the Inspiration Cards Workshop to promote meaningful teacher involvement even for participants with low background in data literacy or experience in using learning analytics. We discuss opportunities and limitations of using the Inspiration Cards Workshops methodology, and particularly Inspiration Cards as a design tool, to inform future LA design efforts.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114583859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using a cluster-based regime-switching dynamic model to understand embodied mathematical learning
Lu Ou, Alejandro Andrade, R. Alberto, Gitte van Helden, A. Bakker
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375513
Embodied learning and the design of embodied learning platforms have gained popularity in recent years owing to the increasing availability of sensing technologies. In our study, we made use of the Mathematical Imagery Trainer for Proportion (MIT-P), which uses a touchscreen tablet to help students explore the concept of mathematical proportion. These sensing technologies provide an unprecedented amount of high-frequency data on students' behaviors. We investigated a statistical model, the mixture Regime-Switching Hidden Logistic Transition Process (mixRHLP), and fit it to the students' hand-motion data. The model simultaneously finds characteristic regimes and assigns students to clusters of regime transitions. To understand the nature of these regimes and clusters, we explore properties of the students' and tutor's verbalizations associated with these different phases.
{"title":"Using a cluster-based regime-switching dynamic model to understand embodied mathematical learning","authors":"Lu Ou, Alejandro Andrade, R. Alberto, Gitte van Helden, A. Bakker","doi":"10.1145/3375462.3375513","DOIUrl":"https://doi.org/10.1145/3375462.3375513","url":null,"abstract":"Embodied learning and the design of embodied learning platforms have gained popularity in recent years due to the increasing availability of sensing technologies. In our study, we made use of the Mathematical Imagery Trainer for Proportion (MIT-P) that uses a touchscreen tablet to help students explore the concept of mathematical proportion. The use of sensing technologies provides an unprecedented amount of high-frequency data on students' behaviors. We investigated a statistical model called mixture Regime-Switching Hidden Logistic Transition Process (mixRHLP) and fit it to the students' hand motion data. Simultaneously, the model finds characteristic regimes and assigns students to clusters of regime transitions. To understand the nature of these regimes and clusters, we explore some properties in students' and tutor's verbalization associated with these different phases.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125829530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing and influencing students' tendency to write self-explanations in online homework
Yuya Asano, Jaemarie Solyst, J. Williams
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375511
In the context of online programming homework for a university course, we explore the extent to which learners engage with optional prompts to self-explain the answers they choose for problems. Such prompts are known to benefit learning in laboratory and classroom settings [4], but there are fewer data on the extent to which students engage with them when they are optional additions to online homework. We report data from a deployment of self-explanation prompts in online programming homework, providing insight into how the frequency of writing explanations correlates with different variables, such as how early students start the homework, whether they answered a problem correctly, and how proficient they are in the language of instruction. We also report suggestive results from a randomized experiment comparing several methods for increasing the rate at which people write explanations, such as including more than one kind of prompt. These findings highlight promising dimensions to explore in understanding how students engage with prompts to explain answers.
{"title":"Characterizing and influencing students' tendency to write self-explanations in online homework","authors":"Yuya Asano, Jaemarie Solyst, J. Williams","doi":"10.1145/3375462.3375511","DOIUrl":"https://doi.org/10.1145/3375462.3375511","url":null,"abstract":"In the context of online programming homework for a university course, we explore the extent to which learners engage with optional prompts to self -explain answers they choose for problems. Such prompts are known to benefit learning in laboratory and classroom settings [4], but there are less data about the extent to which students engage with them when they are optional additions to online homework. We report data from a deployment of self-explanation prompts in online programming homework, providing insight into how the frequency of writing explanations is correlated with different variables, such as how early students start homework, whether they got a problem correct, and how proficient they are in the language of instruction. We also report suggestive results from a randomized experiment comparing several methods for increasing the rate at which people write explanations, such as including more than one kind of prompt. These findings provide insight into promising dimensions to explore in understanding how real students may engage with prompts to explain answers.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129886336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning analytics dashboards: the past, the present and the future
K. Verbert, X. Ochoa, Robin De Croon, Raphael A. Dourado, T. Laet
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375504
Learning analytics dashboards are at the core of the LAK vision of involving humans in the decision-making process. The key focus of these dashboards is to support better human sense-making and decision-making by visualising data about learners for a variety of stakeholders. Early research on learning analytics dashboards focused on the use of visualisation and prediction techniques and demonstrated the rich potential of dashboards in a variety of learning settings. Present research increasingly uses participatory design methods to tailor dashboards to the needs of stakeholders, employs multimodal data-acquisition techniques, and has started to investigate the theoretical underpinnings of dashboards. In this paper, we present these past and present research efforts, together with the results of the VISLA19 workshop on "Visual Approaches to Learning Analytics", held at LAK19 with experts in the domain to identify and articulate common practices and challenges. Based on an analysis of the results, we present a research agenda to help shape the future of learning analytics dashboards.
{"title":"Learning analytics dashboards: the past, the present and the future","authors":"K. Verbert, X. Ochoa, Robin De Croon, Raphael A. Dourado, T. Laet","doi":"10.1145/3375462.3375504","DOIUrl":"https://doi.org/10.1145/3375462.3375504","url":null,"abstract":"Learning analytics dashboards are at the core of the LAK vision to involve the human into the decision-making process. The key focus of these dashboards is to support better human sense-making and decision-making by visualising data about learners to a variety of stakeholders. Early research on learning analytics dashboards focused on the use of visualisation and prediction techniques and demonstrates the rich potential of dashboards in a variety of learning settings. Present research increasingly uses participatory design methods to tailor dashboards to the needs of stakeholders, employs multimodal data acquisition techniques, and starts to research theoretical underpinnings of dashboards. In this paper, we present these past and present research efforts as well as the results of the VISLA19 workshop on \"Visual approaches to Learning Analytics\" that was held at LAK19 with experts in the domain to identify and articulate common practices and challenges for the domain. Based on an analysis of the results, we present a research agenda to help shape the future of learning analytics dashboards.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128861974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fostering and supporting empirical research on evaluative judgement via a crowdsourced adaptive learning system
Hassan Khosravi, George Gyamii, Barbara E. Hanna, J. Lodge
In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (2020). DOI: https://doi.org/10.1145/3375462.3375532
The value of students developing the capacity to make accurate judgements about the quality of their own work and that of others is widely recognised in the higher education literature. Despite this recognition, however, little attention has been paid to the development of tools and strategies with the potential both to foster evaluative judgement and to support empirical research into its growth. This paper demonstrates how educational technologies may be used to fill this gap. In particular, we introduce the adaptive learning system RiPPLE and describe how it aims to (1) develop evaluative judgement in large-class settings through strategies suggested in the literature, such as the use of rubrics, exemplars, and peer review, and (2) enable large empirical studies at low cost to determine the effect size of such strategies. A case study demonstrating how RiPPLE has been used to achieve these goals in a specific context is presented.
{"title":"Fostering and supporting empirical research on evaluative judgement via a crowdsourced adaptive learning system","authors":"Hassan Khosravi, George Gyamii, Barbara E. Hanna, J. Lodge","doi":"10.1145/3375462.3375532","DOIUrl":"https://doi.org/10.1145/3375462.3375532","url":null,"abstract":"The value of students developing the capacity to make accurate judgements about the quality of their work and that of others has been widely recognised in higher education literature. However, despite this recognition, little attention has been paid to the development of tools and strategies with the potential both to foster evaluative judgement and to support empirical research into its growth. This paper provides a demonstration of how educational technologies may be used to fill this gap. In particular, we introduce the adaptive learning system RiPPLE and describe how it aims to (1) develop evaluative judgement in large-class settings through suggested strategies from the literature such as the use of rubrics, exemplars and peer review and (2) enable large empirical studies at low cost to determine the effect-size of such strategies. A case study demonstrating how RiPPLE has been used to achieve these goals in a specific context is presented.","PeriodicalId":355800,"journal":{"name":"Proceedings of the Tenth International Conference on Learning Analytics & Knowledge","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134583464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}