Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis

IF 2.1 | CAS Tier 2 (Engineering & Technology) | JCR Q2 (Education, Scientific Disciplines) | IEEE Transactions on Education | Pub Date: 2024-07-19 | DOI: 10.1109/TE.2024.3421606
Deliang Wang; Gaowei Chen
{"title":"Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis","authors":"Deliang Wang;Gaowei Chen","doi":"10.1109/TE.2024.3421606","DOIUrl":null,"url":null,"abstract":"Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers. Background: Deep learning techniques have been used to automatically analyze classroom dialogue to provide feedback for teachers. However, these complex models operate as black boxes, lacking clear explanations of the analysis, which may lead teachers, particularly those lacking AI knowledge, to distrust the models and hinder their teaching practice. Therefore, it is crucial to address the interpretability issue in AI-powered classroom discourse models. Research Questions: How to explain deep learning-based classroom discourse models using explainable AI methods? What is the effect of these explanations on teachers’ trust in and technology acceptance of the models? How do teachers perceive the explanations of deep learning-based classroom discourse models? Method: Two explainable AI methods were employed to interpret deep learning-based models that analyzed teacher and student talk moves. A pilot study was conducted, involving seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers’ trust and technology acceptance before and after receiving model explanations. Teachers’ perceptions of the model explanations were investigated. Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers’ trust and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and provided their perception of model explanations.","PeriodicalId":55011,"journal":{"name":"IEEE Transactions on Education","volume":"67 6","pages":"907-918"},"PeriodicalIF":2.1000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Education","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10605115/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

Contributions: To address interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers.

Background: Deep learning techniques have been used to automatically analyze classroom dialogue and provide feedback for teachers. However, these complex models operate as black boxes, offering no clear explanation of their analysis, which may lead teachers, particularly those without AI knowledge, to distrust the models and hinder their teaching practice. It is therefore crucial to address the interpretability issue in AI-powered classroom discourse models.

Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What is the effect of these explanations on teachers' trust in, and technology acceptance of, the models? How do teachers perceive the explanations of deep learning-based classroom discourse models?

Method: Two explainable AI methods were employed to interpret deep learning-based models that analyze teacher and student talk moves. A pilot study was conducted with seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers' trust and technology acceptance before and after they received model explanations, and investigated teachers' perceptions of those explanations.

Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The model explanations enhanced teachers' trust in and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and shared their perceptions of them.
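The abstract does not name the two explainable AI methods the authors used. As a purely illustrative aid, the sketch below shows how one widely used technique, LIME (local interpretable model-agnostic explanations), can attribute a talk-move prediction to individual words in an utterance. The classifier, utterances, and labels here are invented stand-ins, not the authors' model or data.

```python
# Minimal illustrative sketch (not the paper's code): LIME applied to a toy
# talk-move classifier. Requires the `lime` and `scikit-learn` packages.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: utterances tagged with two talk-move labels.
utterances = [
    "What do you think about her idea?",          # eliciting a response
    "Can you say more about that?",               # eliciting a response
    "So you are saying the current increases.",   # revoicing
    "In other words, the slope is the speed.",    # revoicing
]
labels = [0, 0, 1, 1]
class_names = ["eliciting", "revoicing"]

# Stand-in for the deep learning discourse model: any classifier exposing
# predict_proba over raw strings works with LIME's text explainer.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

# Explain one prediction: LIME perturbs the utterance by dropping words and
# fits a local linear model, yielding per-word contributions to the label.
explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    "Could you say more about why you think that?",
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] local word attributions
```

Word-level attributions of this kind are one plausible form the teacher-facing explanations could take; SHAP, which assigns additive per-feature contributions, is a common alternative.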
Source journal: IEEE Transactions on Education
Category: Engineering & Technology – Engineering, Electrical & Electronic
CiteScore: 5.80
Self-citation rate: 7.70%
Publication volume: 90
Review time: 1 month
Journal description: The IEEE Transactions on Education (ToE) publishes significant and original scholarly contributions to education in electrical and electronics engineering, computer engineering, computer science, and other fields within the scope of interest of IEEE. Contributions must address discovery, integration, and/or application of knowledge in education in these fields. Articles must support contributions and assertions with compelling evidence and provide explicit, transparent descriptions of the processes through which the evidence is collected, analyzed, and interpreted. While the characteristics of compelling evidence cannot be described for every conceivable situation, in general the assessment of the work being reported must go beyond student self-report and attitudinal data.