Implementing Learning Analytics in Norway
Barbara Wasson, Michail Giannakos, Marte Blikstad-Balas, P. H. Uppstad, Malcolm Langford, E. Bøhn
DOI: 10.18608/jla.2024.8241
In June 2022, the Norwegian Expert Commission on Learning Analytics delivered an interim report to the Norwegian Minister of Education and Research. Motivated by the need to establish a solid foundation upon which to regulate and promote the use of learning analytics in the Norwegian educational sector, the Ministry asked the Expert Commission to investigate the relevant pedagogical, ethical, legal, and privacy issues. Addressing primary, secondary, higher, and vocational education, the interim report surveys the field of learning analytics and the regulatory environment across these contexts and analyzes the challenges and opportunities for Norwegian education. Four dilemmas (data, learning, governance, and competence) signal where greater knowledge, awareness, and reflection are needed, as well as the nature of necessary policy and regulatory choices. In this practical report, we offer insights into the use, development, and regulation of learning analytics in different countries; describe the Expert Commission's mandate, working method, and dilemmas; and conclude with a reflection on the relationship between research on learning analytics and the challenges that arise when implementing learning analytics in practice. This practical report is relevant for those interested in developing policies or practices surrounding the use of learning analytics at the local or national level.
{"title":"Implementing Learning Analytics in Norway","authors":"Barbara Wasson, Michail Giannakos, Marte Blikstad-Balas, P. H. Uppstad, Malcom Langford, E. Bøhn","doi":"10.18608/jla.2024.8241","DOIUrl":"https://doi.org/10.18608/jla.2024.8241","url":null,"abstract":"In June 2022, the Norwegian Expert Commission on Learning Analytics delivered an interim report to the Norwegian Minister of Education and Research. Motivated by the need to establish a solid foundation upon which to regulate and promote the use of learning analytics in the Norwegian educational sector, the Ministry asked the Expert Commission to investigate the relevant pedagogical, ethical, legal, and privacy issues. Addressing primary, secondary, higher, and vocational education, the interim report surveys the field of learning analytics and the regulatory environment across the contexts and analyzes its challenges and opportunities for Norwegian education. Four dilemmas — data, learning, governance, and competence — signal where greater knowledge, awareness, and reflection are needed, as well as the nature of necessary policy and regulatory choices. In this practical report, we offer insights on the use, development, and regulation of LA in different countries, describe the Expert Commission mandate, work method, and dilemmas, and conclude with a reflection on the relationship between research on learning analytics and the challenges that arise when implementing learning analytics in practice. This practical report is relevant for those interested in developing policies or practices surrounding the use of learning analytics at the local or national level.","PeriodicalId":506271,"journal":{"name":"Journal of Learning Analytics","volume":"89 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141802343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nina Seidenberg, I. Jivet, Maren Scheffel, V. Kovanović, Grace Lynch, Hendrik Drachsler
DOI: 10.18608/jla.2024.8247
A relevant learning space for academics, especially junior researchers, is the academic conference. While conference participation has long been associated with personal attendance at the conference venue, virtual participation is becoming increasingly important. This study investigates the perceived value of a purely virtual academic conference for its participants by analyzing evaluation feedback (N = 759) from three virtual and two face-to-face LAK conferences. For the purposes of this study, we derive a definition of conference value and, based on the existing literature, identify factors contributing to the overall value rating of virtual academic conferences. Results indicate that the perceived value of virtual conferences is comparable to that of face-to-face events, with satisfaction with social interaction and topics of interest being the most important predictors. Our insights show that virtual conferences are valuable events for academic professional development, and conference organizers can use these results to design valuable events for their participants.
{"title":"Adaptive Interventions Reducing Social Identity Threat to Increase Equity in Higher Distance Education","authors":"Nina Seidenberg, I. Jivet, Maren Scheffel, V. Kovanović, Grace Lynch, Hendrik Drachsler","doi":"10.18608/jla.2024.8247","DOIUrl":"https://doi.org/10.18608/jla.2024.8247","url":null,"abstract":"A relevant learning space for academics, especially junior researchers, is the academic conference. While conference participation has long been associated with personal attendance at the conference venue, virtual participation is becoming increasingly important. This study investigates the perceived value of a purely virtual academic conference for its participants by analyzing the evaluation feedback (N = 759) from three virtual and two face-to-face LAK conferences. For the purposes of this study, we derive a definition of conference value and identify factors contributing to the overall value rating of virtual academic conferences based on the existing literature. Results indicate a perceived value of virtual conferences comparable with that of face-to-face events, satisfaction with social interaction and topics of interest being the most important predictors. Our insights show that virtual conferences are valuable events for academic professional development and conference organizers can utilize these results to design a valuable event for their participants.","PeriodicalId":506271,"journal":{"name":"Journal of Learning Analytics","volume":"48 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141805002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Method for Developing Process-Based Assessments for Computational Thinking Tasks
S. Bhatt, K. Verbert, Wim Van Den Noortgate
DOI: 10.18608/jla.2024.8291

Computational thinking (CT) is a concept of growing importance to pre-university education. Yet CT is often assessed through results rather than by looking at the CT process itself. Process-based assessments, or assessments that model how a student completed a task, could instead investigate the process of CT as a formative assessment. In this work, we proposed an approach for developing process-based assessments using constructionist tasks specifically for CT assessment in K–12 contexts, with a focus on directly connecting programming artifacts to aspects of CT. We then illustrated such an assessment with 29 students who varied in CT and programming experience. These students completed both a constructionist task and a traditional CT assessment. Data from the constructionist task were used to build a process-based assessment, and results were compared between the two assessment methods. The process-based assessment produced groups of students who differed in their approach to the task, with varying levels of success. However, there was no difference between these groups in their scores on the traditional CT assessment. Process-based assessment built with our approach may be useful as formative assessment, giving process feedback localized to the task given to students.
Following the Impact Chain of the LA Cockpit
Onur Karademir, Lena Borgards, Daniele Di Mitri, Sebastian Strauß, Marcus Kubsch, Markus Brobeil, Adrian Grimm, Sebastian Gombert, N. Rummel, Knut Neumann, Hendrik Drachsler
DOI: 10.18608/jla.2024.8399
This paper presents a teacher dashboard intervention study in secondary school practice involving teachers (n = 16) with their classes (n = 22) and students (n = 403). A quasi-experimental treatment-control group design was implemented to compare student learning outcomes between classrooms where teachers had access to the dashboard and classrooms where they did not. We examined different points in the impact chain of the “LA Cockpit,” a teacher dashboard with a feedback system through which teachers can send students feedback on their learning. To investigate this impact chain from teacher use of dashboards to student learning, we analyzed 1) teachers’ perceived technology acceptance of the LA Cockpit, 2) teacher feedback practices using the LA Cockpit, and 3) student knowledge gains as measured by pre- and post-tests. The analysis of n = 355 feedback messages sent by teachers through the LA Cockpit revealed that the dashboard assists teachers in identifying students facing difficulties and that teachers mostly provided process feedback, which is known to be effective for student learning. For student learning, significantly higher knowledge gains were found in the teacher dashboard condition than in the control condition.
{"title":"Following the Impact Chain of the LA Cockpit","authors":"Onur Karademir, Lena Borgards, Daniele Di Mitri, Sebastian Strauß, Marcus Kubsch, Markus Brobeil, Adrian Grimm, Sebastian Gombert, N. Rummel, Knut Neumann, Hendrik Drachsler","doi":"10.18608/jla.2024.8399","DOIUrl":"https://doi.org/10.18608/jla.2024.8399","url":null,"abstract":"This paper presents a teacher dashboard intervention study in secondary school practice involving teachers (n = 16) with their classes (n = 22) and students (n = 403). A quasi-experimental treatment-control group design was implemented to compare student learning outcomes between classrooms where teachers did not have access to the dashboard and classrooms where teachers had access to the dashboard. We examined different points in the impact chain of the “LA Cockpit,” a teacher dashboard with a feedback system through which teachers can send feedback to their students on student learning. To investigate this impact chain from teacher use of dashboards to student learning, we analyzed 1) teachers’ perceived technology acceptance of the LA Cockpit, 2) teacher feedback practices using the LA Cockpit, and 3) student knowledge gains as measured by pre- and post-tests. The analysis of n = 355 feedback messages sent by teachers through the LA Cockpit revealed that the dashboard assists teachers in identifying students facing difficulties and that teachers mostly provided process feedback, which is known to be effective for student learning. For student learning, significantly higher knowledge gains were found in the teacher dashboard condition compared to the control condition.","PeriodicalId":506271,"journal":{"name":"Journal of Learning Analytics","volume":"27 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141805947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Does a Data-Informed Deliberate Change in Learning Design Impact Students’ Self-Regulated Learning Tactics?
Zhongzhou Chen, Tom Zhang, M. Taub
DOI: 10.18608/jla.2024.8083

The current study measures the extent to which students’ self-regulated learning tactics and learning outcomes change as the result of a deliberate, data-driven improvement in the learning design of mastery-based online learning modules. In the original design, students were required to attempt the assessment once before being allowed to access the learning material. The improved design gave students the choice to skip the first attempt and access the learning material directly. Student learning tactics were measured using a multi-level clustering and process mining algorithm, and a quasi-experimental design was implemented to remove or reduce differences in extraneous factors, including the content covered, the time of implementation, and naturally occurring fluctuations in student learning tactics. The analysis suggests that most students who chose to skip the first attempt were effectively self-regulating their learning and were thus successful in learning from the instructional materials. Students who would have failed the first attempt were much more likely to skip it than those who would have passed it. The new design also resulted in a small improvement in learning outcomes and median learning time. The study demonstrates the creation of a closed loop between learning design and learning analytics: first using learning analytics to inform improvements to the learning design, then assessing the effectiveness and impact of those improvements.
Learners’ Linguistic Alignment and Physiological Synchrony
Joni Lämsä, Justin Edwards, Eetu Haataja, Márta Sobocinski, Paola R. Peña, Andy Nguyen, Sanna Järvelä
DOI: 10.18608/jla.2024.8287
The theory of socially shared regulation of learning (SSRL) suggests that successful collaborative groups can identify and respond to trigger events stemming from cognitive or emotional obstacles in learning. Thus, to develop real-time support for SSRL, novel metrics are needed to identify different types of trigger events that invite SSRL. Our aim was to apply two metrics derived from different data streams to study how trigger events for SSRL shaped group linguistic alignment (based on audio data) and physiological synchrony (based on electrodermal activity data). The data came from six groups of students (N = 18) as they worked face-to-face on a collaborative learning task with one cognitive and two emotional trigger events. We found that the cognitive trigger event increased linguistic alignment in task-description words and led to physiological out-of-synchrony. The emotional trigger events decreased out-of-synchrony and increased high-arousal synchrony at the physiological level but did not affect linguistic alignment. Therefore, different metrics for studying markers and responses to different types of trigger events are needed, suggesting the necessity for multimodal learning analytics to support collaborative learning.
{"title":"Learners’ Linguistic Alignment and Physiological Synchrony","authors":"Joni Lämsä, Justin Edwards, Eetu Haataja, Márta Sobocinski, Paola R. Peña, Andy Nguyen, Sanna Järvelä","doi":"10.18608/jla.2024.8287","DOIUrl":"https://doi.org/10.18608/jla.2024.8287","url":null,"abstract":"The theory of socially shared regulation of learning (SSRL) suggests that successful collaborative groups can identify and respond to trigger events stemming from cognitive or emotional obstacles in learning. Thus, to develop real-time support for SSRL, novel metrics are needed to identify different types of trigger events that invite SSRL. Our aim was to apply two metrics derived from different data streams to study how trigger events for SSRL shaped group linguistic alignment (based on audio data) and physiological synchrony (based on electrodermal activity data). The data came from six groups of students (N = 18) as they worked face-to-face on a collaborative learning task with one cognitive and two emotional trigger events. We found that the cognitive trigger event increased linguistic alignment in task-description words and led to physiological out-of-synchrony. The emotional trigger events decreased out-of-synchrony and increased high-arousal synchrony at the physiological level but did not affect linguistic alignment. Therefore, different metrics for studying markers and responses to different types of trigger events are needed, suggesting the necessity for multimodal learning analytics to support collaborative learning.","PeriodicalId":506271,"journal":{"name":"Journal of Learning Analytics","volume":"3 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141803435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Feedback Uptake and Self-Regulated Learning in Procedural Skills Training
Ignacio Villagrán, Rocío Hernández, Gregory Schuit, Andrés Neyem, Javiera Fuentes, Loreto Larrondo, Elisa Margozzini, María T. Hurtado, Zoe Iriarte, Constanza Miranda, Julián Varas, Isabel Hilliger
DOI: 10.18608/jla.2024.8195

Remote technology has been widely incorporated into health professions education. Procedural skills training requires effective feedback and reflection processes, and supporting a self-regulated learning (SRL) approach with learning analytics dashboards (LADs) has proven beneficial in online environments. Despite the potential of LADs, designing them to enhance SRL and provide useful feedback remains a significant challenge. Focusing on LAD design, implementation, and evaluation, the study followed a mixed-methods, two-phase, design-based research approach. It triangulated qualitative interviews with SRL and sensemaking questionnaires to comprehensively understand the LAD’s effectiveness and students’ SRL and feedback-uptake strategies during remote procedural skills training. Initial findings revealed the value students placed on performance visualization and peer comparison, despite some challenges in LAD design and usability. The study also identified the prominent adoption of SRL strategies such as help-seeking, elaboration, and strategic planning. Sensemaking results showed the value of personalized performance metrics and planning resources in the LAD and yielded recommendations to improve reflection and feedback uptake. Subsequent findings suggested that SRL levels significantly predicted levels of sensemaking. Students valued the LAD as a tool for supporting feedback uptake and strategic planning, demonstrating its potential for enhancing procedural skills learning.
Scaffolding Feedback Literacy: Designing a Feedback Analytics Tool with Students
Flora Jin, Bhagya Maheshi, Roberto Martínez-Maldonado, D. Gašević, Yi-Shan Tsai
DOI: 10.18608/jla.2024.8339
Feedback is essential in learning. The emerging concept of feedback literacy underscores the skills students require to use feedback effectively, highlighting students’ responsibilities in the feedback process. Yet there is currently a lack of mechanisms to understand how students make sense of feedback and whether they act on it. This gap makes it hard to support students effectively in developing feedback literacy and to improve the quality of feedback. As a specific application of learning analytics, feedback analytics (analytics on learner engagement with feedback) can offer insights into students’ learning engagement and progression, which can in turn be used to scaffold student feedback literacy. This study proposes a feedback analytics tool, designed with students, that aims to help students synthesize feedback from multiple sources, scaffold the sense-making process, and prompt deeper reflection or action on feedback based on data about students’ interactions with it. We held focus group discussions with 38 students to learn about their feedback experiences and identify desired tool features. Based on the identified user requirements, a prototype was developed and validated with 16 students via individual interviews. Based on the findings, we envision a feedback analytics tool aimed at scaffolding student feedback literacy.
{"title":"Scaffolding Feedback Literacy: Designing a Feedback Analytics Tool with Students","authors":"Flora Jin, Bhagya Maheshi, Roberto Martínez-Maldonado, D. Gašević, Yi-Shan Tsai","doi":"10.18608/jla.2024.8339","DOIUrl":"https://doi.org/10.18608/jla.2024.8339","url":null,"abstract":"Feedback is essential in learning. The emerging concept of feedback literacy underscores the skills students require for effective use of feedback. This highlights students’ responsibilities in the feedback process. Yet, there is currently a lack of mechanisms to understand how students make sense of feedback and whether they act on it. This gap makes it hard to effectively support students in feedback literacy development and improve the quality of feedback. As a specific application of learning analytics, feedback analytics (analytics on learner engagement with feedback) can offer insights into students’ learning engagement and progression, which can in turn be used to scaffold student feedback literacy. This study proposes a feedback analytics tool, designed with students, aimed at aiding students to synthesize feedback received from multiple sources, scaffold the sense-making process, and prompt deeper reflections or actions on feedback based on data about students’ interactions with feedback. We held focus group discussions with 38 students to learn about their feedback experiences and identified tool features. Based on identified user requirements, a prototype was developed and validated with 16 students via individual interviews. Based on the findings, we envision a feedback analytics tool with the aim of scaffolding student feedback literacy. ","PeriodicalId":506271,"journal":{"name":"Journal of Learning Analytics","volume":"104 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141667049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Interventions Reducing Social Identity Threat to Increase Equity in Higher Distance Education
Laura Froehlich, Sebastian Weydner-Volkmann
DOI: 10.18608/jla.2023.8301

Educational disparities between traditional and non-traditional student groups in higher distance education can potentially be reduced by alleviating social identity threat and strengthening students’ sense of belonging in the academic context. We present a use case of how learning analytics and machine learning can be applied to develop and implement an algorithm that classifies students as at risk of experiencing social identity threat. These students would be presented with an intervention fostering a sense of belonging. We systematically analyze the intervention’s intended positive consequences for reducing structural discrimination and increasing educational equity, as well as its potential risks based on privacy, data protection, and algorithmic fairness considerations. Finally, we provide recommendations for higher education institutions to mitigate the risk of bias and unintended consequences during algorithm development and implementation from an ethical perspective.
When Leaving is Persisting
Orly Klein-Latucha, Arnon Hershkovitz
DOI: 10.18608/jla.2023.8219

We report on a large-scale, log-based study of the associations between persistence and success in an online game-based learning environment for elementary school mathematics. While working with applets, learners can rerun a task after completing it, or can halt before completing it and run it again; both of these mechanisms may improve the score. We analyzed about 3.1 million applet runs by N = 44,323 1st–6th-grade students to gain a nuanced understanding of persistence patterns by identifying sequences of consecutive single applet runs (SoCSARs). Overall, we analyzed 2,249,647 SoCSARs and identified six patterns based on halting, rerunning, and task completion: 1) Single Complete, 2) Single Incomplete, 3) Some Incomplete and Single Complete, 4) Multiple Incomplete and No Complete, 5) Multiple Complete and No Incomplete, and 6) Multiple Complete and Some Incomplete. As expected, we found a positive correlation between SoCSAR length and success. Some patterns show low to medium positive associations with success, while others show low to medium negative associations. Furthermore, the associations between the type of persistence and success vary by grade level. We discuss these complex relationships and suggest metacognitive and motivational factors that may explain why some patterns are productive and others are not.