Explainable Learning Analytics: Assessing the stability of student success prediction models by means of explainable AI

IF 6.7 | CAS Tier 1 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Decision Support Systems | Pub Date: 2024-04-26 | DOI: 10.1016/j.dss.2024.114229
Elena Tiukhova, Pavani Vemuri, Nidia López Flores, Anna Sigridur Islind, María Óskarsdóttir, Stephan Poelmans, Bart Baesens, Monique Snoeck
{"title":"可解释的学习分析:通过可解释人工智能评估学生成功预测模型的稳定性","authors":"Elena Tiukhova ,&nbsp;Pavani Vemuri ,&nbsp;Nidia López Flores ,&nbsp;Anna Sigridur Islind ,&nbsp;María Óskarsdóttir ,&nbsp;Stephan Poelmans ,&nbsp;Bart Baesens ,&nbsp;Monique Snoeck","doi":"10.1016/j.dss.2024.114229","DOIUrl":null,"url":null,"abstract":"<div><p>Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process to keep students motivated, engaged, and successful. At the course level, the combination of predictive analytics and self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and determine how they want to learn. The best performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course contexts. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions, but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift, to investigate the stability of student success prediction models over time; (2) uses Shapley Additive explanations in a novel way to explore the stability of extracted feature importance rankings generated for these models; (3) generates new insights that emerge from stable features across cohorts, enabling teachers to determine study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.</p></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"182 ","pages":"Article 114229"},"PeriodicalIF":6.7000,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable Learning Analytics: Assessing the stability of student success prediction models by means of explainable AI\",\"authors\":\"Elena Tiukhova ,&nbsp;Pavani Vemuri ,&nbsp;Nidia López Flores ,&nbsp;Anna Sigridur Islind ,&nbsp;María Óskarsdóttir ,&nbsp;Stephan Poelmans ,&nbsp;Bart Baesens ,&nbsp;Monique Snoeck\",\"doi\":\"10.1016/j.dss.2024.114229\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process to keep students motivated, engaged, and successful. At the course level, the combination of predictive analytics and self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and determine how they want to learn. The best performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course contexts. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions, but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). 
In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift, to investigate the stability of student success prediction models over time; (2) uses Shapley Additive explanations in a novel way to explore the stability of extracted feature importance rankings generated for these models; (3) generates new insights that emerge from stable features across cohorts, enabling teachers to determine study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.</p></div>\",\"PeriodicalId\":55181,\"journal\":{\"name\":\"Decision Support Systems\",\"volume\":\"182 \",\"pages\":\"Article 114229\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-04-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Decision Support Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167923624000629\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167923624000629","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process to keep students motivated, engaged, and successful. At the course level, the combination of predictive analytics and self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and determine how they want to learn. The best performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course contexts. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions, but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift, to investigate the stability of student success prediction models over time; (2) uses Shapley Additive explanations in a novel way to explore the stability of extracted feature importance rankings generated for these models; (3) generates new insights that emerge from stable features across cohorts, enabling teachers to determine study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.
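The study's three building blocks lend themselves to small illustrations. First, concept drift: the sketch below, a minimal example rather than the authors' procedure, compares each feature's distribution in a new cohort against the training cohort with a two-sample Kolmogorov-Smirnov test, one common drift check. The feature names, synthetic data, and 0.01 significance level are assumptions for illustration only.

```python
# A minimal concept-drift sketch (not the authors' procedure): a univariate
# two-sample Kolmogorov-Smirnov test per feature between two cohorts.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
features = ["logins", "video_views", "quiz_score"]  # hypothetical features

# Synthetic stand-ins for two course cohorts; "quiz_score" drifts upward.
cohort_2021 = {f: rng.normal(0.0, 1.0, 400) for f in features}
cohort_2022 = {f: rng.normal(0.0, 1.0, 400) for f in features}
cohort_2022["quiz_score"] += 0.8  # injected distribution shift

for f in features:
    stat, p = ks_2samp(cohort_2021[f], cohort_2022[f])
    flag = "drift" if p < 0.01 else "stable"
    print(f"{f:12s} KS={stat:.3f} p={p:.4f} -> {flag}")
```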
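Second, performance drift: a sketch that trains a classifier on one cohort and tracks AUC on later ones, flagging a drop against a baseline. The gradient-boosting model, the injected noise, and the 0.05 drop threshold are illustrative choices, not values from the paper.

```python
# A minimal performance-drift sketch (not the authors' pipeline): train on the
# first cohort and watch AUC degrade on a later, artificially shifted cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1500, n_features=8, random_state=0)

# Split into three equal "cohorts"; inject feature noise into the last one.
cohorts = [(X[i * 500:(i + 1) * 500], y[i * 500:(i + 1) * 500]) for i in range(3)]
X_last = cohorts[2][0] + rng.normal(0.0, 1.5, size=cohorts[2][0].shape)
cohorts[2] = (X_last, cohorts[2][1])

# Train on cohort 0; take cohort 1 as the performance baseline.
model = GradientBoostingClassifier(random_state=0).fit(*cohorts[0])
baseline = roc_auc_score(cohorts[1][1], model.predict_proba(cohorts[1][0])[:, 1])

for i, (X_c, y_c) in enumerate(cohorts[1:], start=1):
    auc = roc_auc_score(y_c, model.predict_proba(X_c)[:, 1])
    status = "possible drift" if auc < baseline - 0.05 else "stable"
    print(f"cohort {i}: AUC={auc:.3f} ({status})")
```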
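Third, explanation stability: the paper uses SHAP (Shapley Additive exPlanations) to compare feature-importance rankings across cohorts. A minimal sketch of that idea, assuming tabular per-cohort data and a tree-based model, computes mean(|SHAP value|) per feature with the shap library and quantifies ranking agreement with Spearman correlation; the synthetic cohorts and feature names are placeholders, not the study's data or model.

```python
# A minimal explanation-stability sketch (not the authors' published code):
# per-cohort mean(|SHAP value|) importances compared via Spearman correlation.
import numpy as np
import pandas as pd
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def shap_importance(X: pd.DataFrame, y: np.ndarray) -> pd.Series:
    """Fit a model on one cohort and return mean(|SHAP value|) per feature."""
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)
    return pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)

# Two synthetic "cohorts" standing in for consecutive course offerings.
features = [f"activity_{i}" for i in range(8)]  # hypothetical feature names
X_all, y_all = make_classification(n_samples=1000, n_features=8, random_state=1)
X_all = pd.DataFrame(X_all, columns=features)
imp_a = shap_importance(X_all[:500], y_all[:500])
imp_b = shap_importance(X_all[500:], y_all[500:])

# Spearman rho near 1 means the importance ranking is stable across cohorts.
rho, _ = spearmanr(imp_a, imp_b)
print(f"Feature-ranking stability (Spearman rho): {rho:.2f}")
```

A Spearman rho close to 1 across consecutive cohorts would indicate that the model relies on the same features over time, which is the kind of stable signal the study argues teachers can turn into study advice.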

Source journal
Decision Support Systems (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 14.70
Self-citation rate: 6.70%
Articles published: 119
Review time: 13 months
Journal description
The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).
Latest articles in this journal
- A comparative analysis of the effect of initiative risk statement versus passive risk disclosure on the financing performance of Kickstarter campaigns
- DeepSecure: A computational design science approach for interpretable threat hunting in cybersecurity decision making
- Editorial Board
- Effects of visual-preview and information-sidedness features on website persuasiveness
- The evolution of organizations and stakeholders for metaverse ecosystems: Editorial for the special issue on metaverse part 1