
Latest publications from LAK23: 13th International Learning Analytics and Knowledge Conference

METS: Multimodal Learning Analytics of Embodied Teamwork Learning
Pub Date : 2023-03-13 DOI: 10.1145/3576050.3576076
Linxuan Zhao, Z. Swiecki, D. Gašević, Lixiang Yan, S. Dix, Hollie Jaggard, Rosie Wotherspoon, Abra Osborne, Xinyu Li, Riordan Alfredo, Roberto Martínez-Maldonado
Embodied team learning is a form of group learning that occurs in co-located settings where students need to interact with others while actively using resources in the physical learning space to achieve a common goal. In such situations, communication dynamics can be complex as team discourse segments can happen in parallel at different locations of the physical space with varied team member configurations. This can make it hard for teachers to assess the effectiveness of teamwork and for students to reflect on their own experiences. To address this problem, we propose METS (Multimodal Embodied Teamwork Signature), a method to model team dialogue content in combination with spatial and temporal data to generate a signature of embodied teamwork. We present a study in the context of a highly dynamic healthcare team simulation space where students can freely move. We illustrate how signatures of embodied teamwork can help to identify key differences between high and low performing teams: i) across the whole learning session; ii) at different phases of learning sessions; and iii) at particular spaces of interest in the learning space.
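The abstract does not spell out the METS pipeline in detail; as a hedged illustration only, the Python sketch below tallies utterances per team member within assumed spaces of interest, using entirely hypothetical timestamps, positions, and room layout. A real signature would additionally encode dialogue content and session phases, as described in the paper.

```python
from collections import defaultdict

# Hypothetical inputs: timestamped utterances and indoor-positioning samples.
utterances = [          # (timestamp_s, speaker, text)
    (12.0, "s1", "check the airway"),
    (15.5, "s2", "vitals are dropping"),
    (16.0, "s1", "drawing up the medication"),
]
positions = [           # (timestamp_s, speaker, x_m, y_m)
    (12.0, "s1", 1.2, 0.8),
    (15.5, "s2", 3.9, 2.1),
    (16.0, "s1", 3.2, 1.5),
]
spaces_of_interest = {  # name -> (x_min, x_max, y_min, y_max), assumed room layout
    "bedside": (0.0, 2.0, 0.0, 2.0),
    "medication_cart": (3.0, 5.0, 1.0, 3.0),
}

def locate(x, y):
    """Map a coordinate to an assumed space of interest."""
    for name, (x0, x1, y0, y1) in spaces_of_interest.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

# Join utterances with positions on (rounded timestamp, speaker).
pos_index = {(round(t), s): locate(x, y) for t, s, x, y in positions}

# Crude "signature": how often each member speaks in each space of interest.
signature = defaultdict(int)
for t, speaker, _text in utterances:
    signature[(speaker, pos_index.get((round(t), speaker), "other"))] += 1

print(dict(signature))
```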
Citations: 6
Impact of Non-Cognitive Interventions on Student Learning Behaviors and Outcomes: An analysis of seven large-scale experimental inventions
Pub Date : 2023-03-13 DOI: 10.1145/3576050.3576073
Kirk P. Vanacore, Ashish Gurung, Andrew Mcreynolds, Allison S. Liu, S. Shaw, N. Heffernan
As evidence grows supporting the importance of non-cognitive factors in learning, computer-assisted learning platforms increasingly incorporate non-academic interventions to influence student learning and learning-related behaviors. Non-cognitive interventions often attempt to influence students’ mindset, motivation, or metacognitive reflection to impact learning behaviors and outcomes. In the current paper, we analyze data from five experiments, involving seven treatment conditions embedded in mastery-based learning activities hosted on a computer-assisted learning platform focused on middle school mathematics. Each treatment condition embodied a specific non-cognitive theoretical perspective. Over seven school years, 20,472 students participated in the experiments. We estimated the effects of each treatment condition on students’ response time, hint usage, likelihood of mastering knowledge components, learning efficiency, and post-test performance. Our analyses reveal a mix of positive and negative treatment effects on student learning behaviors and performance. Few interventions impacted learning as assessed by the post-tests. These findings highlight the difficulty in positively influencing student learning behaviors and outcomes using non-cognitive interventions.
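The paper’s statistical models are not reproduced here; as a minimal sketch of estimating a treatment effect on a post-test outcome (hypothetical column names, statsmodels formula API), one might fit:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical experiment log: one row per student.
df = pd.DataFrame({
    "condition":   ["control", "treatment", "treatment", "control", "treatment", "control"],
    "prior_score": [0.42, 0.55, 0.31, 0.68, 0.50, 0.47],
    "post_test":   [0.50, 0.71, 0.45, 0.66, 0.62, 0.49],
})

# OLS of post-test performance on the assigned condition, adjusting for prior performance.
model = smf.ols("post_test ~ C(condition) + prior_score", data=df).fit()
print(model.summary())
```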
Citations: 1
Towards Automated Analysis of Rhetorical Categories in Students Essay Writings using Bloom’s Taxonomy
Pub Date : 2023-03-13 DOI: 10.1145/3576050.3576112
Sehrish Iqbal, Mladen Raković, Guanliang Chen, Tongguang Li, Rafael Ferreira Mello, Yizhou Fan, G. Fiorentino, Naif Radi Aljohani, D. Gašević
Essay writing has become one of the most common learning tasks assigned to students enrolled in various courses at different educational levels, owing to the growing demand for future professionals to effectively communicate information to an audience and develop a written product (i.e., an essay). Evaluating a written product requires scorers who manually examine the existence of rhetorical categories, which is a time-consuming task. Machine Learning (ML) approaches have the potential to alleviate this challenge. As a result, several attempts have been made in the literature to automate the identification of rhetorical categories using Rhetorical Structure Theory (RST). However, RST does not provide information regarding students’ cognitive level, which motivates the use of Bloom’s Taxonomy. Therefore, in this research we propose to: i) investigate the extent to which classification of rhetorical categories can be automated based on Bloom’s taxonomy by comparing the traditional ML classifiers with the pre-trained language model BERT, ii) explore the associations between rhetorical categories and writing performance. Our results showed that the BERT model outperformed the traditional ML-based classifiers with 18% better accuracy, indicating it can be used in future analytics tools. Moreover, we found a statistical difference between the associations of rhetorical categories in low-achiever, medium-achiever and high-achiever groups, which implies that rhetorical categories can be predictive of writing performance.
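As a hedged sketch of the BERT side of such a comparison (hypothetical sentences and a made-up six-way label set; not the authors’ configuration), fine-tuning a sequence classifier with Hugging Face transformers could look like:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical labelled sentences; label ids stand for assumed rhetorical/cognitive categories.
sentences = ["The author lists three causes of inflation.",
             "This evidence suggests the policy backfired."]
labels = torch.tensor([0, 2])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the (tiny) batch, for illustration only
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Predicted category per sentence.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```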
Citations: 1
Recurrence Quantification Analysis of Eye Gaze Dynamics During Team Collaboration
Pub Date : 2023-03-13 DOI: 10.1145/3576050.3576113
R. Moulder, Brandon M. Booth, Angelina Abitino, Sidney K. D’Mello
Shared visual attention between team members facilitates collaborative problem solving (CPS), but little is known about how team-level eye gaze dynamics influence the quality and success of CPS. To better understand the role of shared visual attention during CPS, we collected eye gaze data from 279 individuals solving computer-based physics puzzles while in teams of three. We converted eye gaze into discrete screen locations and quantified team-level gaze dynamics using recurrence quantification analysis (RQA). Specifically, we used a centroid-based auto-RQA approach, a pairwise team-member cross-RQA approach, and a multi-dimensional RQA approach to quantify team-level eye gaze dynamics from the eye gaze data of team members. We find that teams differing in composition based on prior task knowledge, gender, and race show few differences in team-level eye gaze dynamics. We also find that RQA metrics of team-level eye gaze dynamics were predictive of task success (all ps < .001). However, the same metrics showed different patterns of feature importance depending on the predictive model and RQA type, suggesting some redundancy in task-relevant information. These findings signify that team-level eye gaze dynamics play an important role in CPS and that different forms of RQA pick up on unique aspects of shared attention between team members.
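As a hedged illustration of cross-recurrence on categorical gaze streams (hypothetical screen-region codes; the authors’ centroid-based and multi-dimensional variants are not reproduced), two common RQA metrics can be computed as follows:

```python
import numpy as np

# Hypothetical gaze streams, already discretized into screen regions per frame.
gaze_a = np.array(["ramp", "ball", "ball", "toolbar", "ramp", "ball"])
gaze_b = np.array(["ball", "ball", "ramp", "toolbar", "ramp", "ball"])

# Cross-recurrence matrix: 1 when the two members look at the same region.
R = (gaze_a[:, None] == gaze_b[None, :]).astype(int)

# Recurrence rate: overall proportion of shared attention across all time pairs.
recurrence_rate = R.mean()

# Determinism (simple variant): share of recurrent points on diagonal lines of
# length >= 2, i.e. stretches where joint attention persists across frames.
min_len, on_lines = 2, 0
for k in range(-R.shape[0] + 1, R.shape[1]):
    diag = np.diagonal(R, offset=k)
    run = 0
    for v in list(diag) + [0]:          # sentinel 0 closes the final run
        if v:
            run += 1
        else:
            if run >= min_len:
                on_lines += run
            run = 0
determinism = on_lines / R.sum() if R.sum() else 0.0

print(recurrence_rate, determinism)
```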
Citations: 1
Using Transformer Language Models to Validate Peer-Assigned Essay Scores in Massive Open Online Courses (MOOCs)
Pub Date : 2023-03-13 DOI: 10.1145/3576050.3576098
Wesley Morris, S. Crossley, Langdon Holmes, Anne Trumbore
Massive Open Online Courses (MOOCs) such as those offered by Coursera are popular ways for adults to gain important skills, advance their careers, and pursue their interests. Within these courses, students are often required to compose, submit, and peer review written essays, providing a valuable pedagogical experience for the student and a wealth of natural language data for the educational researcher. However, the scores provided by peers do not always reflect the actual quality of the text, generating questions about the reliability and validity of the scores. This study evaluates methods to increase the reliability of MOOC peer-review ratings through a series of validation tests on peer-reviewed essays. Reliability of reviewers was based on correlations between text length and essay quality. Raters were pruned based on score variance and the lexical diversity observed in their comments to create subsets of raters. Each subset was then used as training data to fine-tune distilBERT language models to automatically score essay quality as a measure of validation. The accuracy of each language model for each subset was evaluated. We find that training language models on data subsets produced by more reliable raters, based on a combination of score variance and lexical diversity, produces more accurate essay-scoring models. The approach developed in this study should allow for enhanced reliability of peer-reviewed scoring in MOOCs, affording greater credibility within these systems.
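A minimal sketch of the rater-pruning step (hypothetical ratings table, assumed cutoffs; the subsequent distilBERT fine-tuning is omitted) might look like:

```python
import pandas as pd

# Hypothetical peer-review table: one row per rating a reviewer gave.
reviews = pd.DataFrame({
    "rater_id": ["r1", "r1", "r2", "r2", "r3", "r3"],
    "score":    [3, 5, 4, 4, 1, 5],
    "comment":  ["clear thesis", "good evidence but weak close",
                 "nice", "nice", "strong argument overall", "needs citations"],
})

def type_token_ratio(texts):
    """Lexical diversity of all comments written by one rater."""
    tokens = " ".join(texts).lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

per_rater = reviews.groupby("rater_id").agg(
    score_var=("score", "var"),
    ttr=("comment", type_token_ratio),
)

# Keep raters who actually differentiate essays (non-trivial score variance)
# and write varied comments (lexical diversity above an assumed cutoff).
reliable = per_rater[(per_rater["score_var"] > 0) & (per_rater["ttr"] >= 0.5)]
print(reliable.index.tolist())
```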
Citations: 2
TikTok as Learning Analytics Data: Framing Climate Change and Data Practices
Pub Date : 2023-03-13 DOI: 10.1145/3576050.3576055
Ha Nguyen
Climate change has far-reaching impacts on communities around the world. However, climate change education has more often focused on scientific facts and statistics at a global scale than experiences at personal and local scales. To understand how to frame climate change education, I turn to youth-created videos on TikTok—a video-sharing, social media platform. Semantic network analysis of hashtags related to climate change reveals multifaceted, intertwining discourse around awareness of climate change consequences, call for action to reduce human impacts on natural systems, and environmental activism. I further explore how youth integrate personal, lived experiences data into climate change discussions. A higher usage of second-person perspective ("you"; i.e., addressing the audience), prosocial and agency words, and negative messaging tone are associated with higher odds of a video integrating lived experiences. These findings illustrate the platform’s affordances: In communicating to a broad audience, youth take on agency and pro-social stances and express emotions to relate to viewers and situate their content. Findings suggest the utility of learning analytics to explore youth’s perspectives and provide insights to frame climate change education in ways that elevate lived experiences.
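As a hedged sketch of the kind of hashtag co-occurrence network used for semantic network analysis (hypothetical hashtag sets; networkx assumed available), one could build and inspect the network as follows:

```python
import itertools
import networkx as nx

# Hypothetical hashtag sets, one per TikTok video.
videos = [
    {"climatechange", "climateaction", "sustainability"},
    {"climatechange", "climateanxiety"},
    {"climateaction", "sustainability", "zerowaste"},
]

G = nx.Graph()
for tags in videos:
    for a, b in itertools.combinations(sorted(tags), 2):
        # Edge weight counts how often two hashtags appear on the same video.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Central hashtags hint at the themes that organize the discourse.
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])
```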
Citations: 2
Protected Attributes Tell Us Who, Behavior Tells Us How: A Comparison of Demographic and Behavioral Oversampling for Fair Student Success Modeling
Pub Date : 2022-12-20 DOI: 10.1145/3576050.3576149
J. Cock, Muhammad Bilal, Richard Davis, M. Marras, Tanja Kaser
Algorithms deployed in education can shape the learning experience and success of a student. It is therefore important to understand whether and how such algorithms might create inequalities or amplify existing biases. In this paper, we analyze the fairness of models which use behavioral data to identify at-risk students and suggest two novel pre-processing approaches for bias mitigation. Based on the concept of intersectionality, the first approach involves intelligent oversampling on combinations of demographic attributes. The second approach does not require any knowledge of demographic attributes and is based on the assumption that such attributes are a (noisy) proxy for student behavior. We hence propose to directly oversample different types of behaviors identified in a cluster analysis. We evaluate our approaches on data from (i) an open-ended learning environment and (ii) a flipped classroom course. Our results show that both approaches can mitigate model bias. Directly oversampling on behavior is a valuable alternative when demographic metadata is not available. Source code and extended results are provided at https://github.com/epfl-ml4ed/behavioral-oversampling.
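A minimal sketch of the behavioral variant, oversampling on cluster labels rather than demographic attributes (hypothetical features and clusters; not the released implementation linked in the abstract), could look like:

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical behavioral features with a cluster label from a prior clustering run.
df = pd.DataFrame({
    "video_hours":   [2.1, 0.3, 5.4, 0.2, 4.8, 0.1],
    "quiz_attempts": [3, 1, 9, 0, 7, 1],
    "cluster":       [0, 1, 0, 1, 0, 2],
})

# Oversample every behavioral cluster up to the size of the largest one,
# so minority behavior types are not drowned out during model training.
target = df["cluster"].value_counts().max()
balanced = pd.concat([
    resample(group, replace=True, n_samples=target, random_state=42)
    for _, group in df.groupby("cluster")
], ignore_index=True)

print(balanced["cluster"].value_counts())
```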
Citations: 0
Insights into undergraduate pathways using course load analytics
Pub Date : 2022-12-20 DOI: 10.1145/3576050.3576081
Conrad Borchers, Z. Pardos
Course load analytics (CLA) inferred from LMS and enrollment features can offer a more accurate representation of course workload to students than credit hours and potentially aid in their course selection decisions. In this study, we produce and evaluate the first machine-learned predictions of student course load ratings and generalize our model to the full 10,000 course catalog of a large public university. We then retrospectively analyze longitudinal differences in the semester load of student course selections throughout their degree. CLA by semester shows that a student’s first semester at the university is among their highest load semesters, as opposed to a credit hour-based analysis, which would indicate it is among their lowest. Investigating what role predicted course load may play in program retention, we find that students who maintain a semester load that is low as measured by credit hours but high as measured by CLA are more likely to leave their program of study. This discrepancy in course load is particularly pertinent in STEM and associated with high prerequisite courses. Our findings have implications for academic advising, institutional handling of the freshman experience, and student-facing analytics to help students better plan, anticipate, and prepare for their selected courses.
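The paper’s feature set and models are not reproduced here; as a hedged sketch only, predicting a survey-based load rating from a few assumed LMS and enrollment features might look like:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical per-course features with survey-based load ratings as the target.
courses = pd.DataFrame({
    "credit_hours":       [3, 4, 2, 3, 4, 3, 1, 4],
    "weekly_lms_minutes": [310, 520, 150, 400, 610, 280, 90, 700],
    "assignments":        [8, 12, 4, 10, 14, 6, 2, 16],
    "load_rating":        [2.5, 4.0, 1.5, 3.0, 4.5, 2.0, 1.0, 5.0],
})

X = courses.drop(columns="load_rating")
y = courses["load_rating"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A simple regressor stands in for whatever model family the authors used.
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```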
Citations: 1
Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design
Pub Date : 2022-12-17 DOI: 10.1145/3576050.3576147
Vinitra Swamy, Sijia Du, M. Marras, Tanja Kaser
Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs, each differing in one educationally relevant aspect, and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that quantitatively, explainers significantly disagree with each other about what is important, and qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers.
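As a hedged sketch of the quantitative comparison step (hypothetical per-feature attributions standing in for LIME and SHAP outputs; the full pipeline is in the linked repository), explanation distance could be computed as:

```python
import numpy as np
from scipy.spatial.distance import cosine

# Hypothetical per-feature importance scores for one student, from two explainers.
features = ["video_time", "forum_posts", "quiz_delay", "regularity"]
lime_attr = np.array([0.42, 0.10, -0.31, 0.17])
shap_attr = np.array([0.35, 0.05, -0.45, 0.15])

def normalize(a):
    # Compare relative importance magnitudes rather than raw, differently scaled values.
    return np.abs(a) / np.abs(a).sum()

distance = cosine(normalize(lime_attr), normalize(shap_attr))
print(f"cosine distance between explanations: {distance:.3f}")
```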
Citations: 2
Do Not Trust a Model Because It is Confident: Uncovering and Characterizing Unknown Unknowns to Student Success Predictors in Online-Based Learning
Pub Date : 2022-12-16 DOI: 10.1145/3576050.3576148
Roberta Galici, Tanja Kaser, G. Fenu, M. Marras
Student success models might be prone to develop weak spots, i.e., examples that are hard to classify accurately due to insufficient representation during model creation. This weakness is one of the main factors undermining users’ trust, since model predictions could, for instance, lead an instructor to not intervene on a student in need. In this paper, we highlight the need to detect and characterize unknown unknowns in student success prediction in order to better understand when models may fail. Unknown unknowns include the students for whom the model is highly confident in its predictions but is actually wrong. Therefore, we cannot solely rely on the model’s confidence when evaluating prediction quality. We first introduce a framework for the identification and characterization of unknown unknowns. We then assess its informativeness on log data collected from flipped courses and online courses using quantitative analyses and interviews with instructors. Our results show that unknown unknowns are a critical issue in this domain and that our framework can be applied to support their detection. The source code is available at https://github.com/epfl-ml4ed/unknown-unknowns.
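A minimal sketch of flagging confident-but-wrong predictions (hypothetical probabilities and labels, assumed confidence cutoff; the released framework linked above is more elaborate):

```python
import numpy as np

# Hypothetical model outputs on held-out students: P(pass) and true labels.
p_pass = np.array([0.96, 0.12, 0.91, 0.88, 0.45, 0.97])
y_true = np.array([1,    0,    0,    1,    1,    0])  # 1 = passed

confidence_threshold = 0.9
y_pred = (p_pass >= 0.5).astype(int)
confidence = np.maximum(p_pass, 1 - p_pass)

# Unknown-unknown candidates: the model is highly confident yet wrong.
unknown_unknowns = np.where((confidence >= confidence_threshold) & (y_pred != y_true))[0]
print("candidate unknown unknowns at indices:", unknown_unknowns.tolist())
```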
Citations: 2