Automated feedback improves teachers’ questioning quality in brick-and-mortar classrooms: Opportunities for further enhancement

Computers & Education · IF 8.9 · CAS Tier 1 (Education) · Q1 Computer Science, Interdisciplinary Applications · Pub Date: 2024-11-12 · DOI: 10.1016/j.compedu.2024.105183
Dorottya Demszky, Jing Liu, Heather C. Hill, Shyamoli Sanghi, Ariel Chung
Journal: Computers & Education, Volume 227, Article 105183
URL: https://www.sciencedirect.com/science/article/pii/S0360131524001970
Citations: 0

Abstract

AI-powered professional learning tools that provide teachers with individualized feedback on their instruction have proven effective at improving instruction and student engagement in virtual learning contexts. Despite the need for consistent, personalized professional learning in K-12 settings, the effectiveness of automated feedback tools in traditional classrooms remains unexplored. We present results from 224 Utah mathematics and science teachers who engaged in a pre-registered randomized controlled trial, conducted in partnership with TeachFX, to assess the impact of automated feedback in K-12 classrooms. This feedback targeted “focusing questions” — questions that probe students’ thinking by pressing for explanations and reflection. We find that teachers opened emails containing the automated feedback about 53–65% of the time, and the feedback increased their use of focusing questions by 20% (p < 0.01) compared to the control group. The feedback did not impact other teaching practices. Qualitative interviews with 13 teachers revealed mixed perceptions of the automated feedback. Some teachers appreciated the reflective insights, while others faced barriers such as skepticism about accuracy, data privacy concerns, and time constraints. Our findings highlight the promises and areas of improvement for implementing effective and teacher-friendly automated professional learning tools in brick-and-mortar classrooms.
Source Journal

Computers & Education (Engineering & Technology – Computer Science: Interdisciplinary Applications)

CiteScore: 27.10
Self-citation rate: 5.80%
Annual article output: 204
Review time: 42 days
Journal description: Computers & Education seeks to advance understanding of how digital technology can improve education by publishing high-quality research that expands both theory and practice. The journal welcomes research papers exploring the pedagogical applications of digital technology, with a focus broad enough to appeal to the wider education community.