Understanding validity criteria in technology-enhanced learning: A systematic literature review

IF 8.9 | Tier 1 (Education) | JCR Q1 (Computer Science, Interdisciplinary Applications) | Computers & Education | Pub Date: 2024-07-30 | DOI: 10.1016/j.compedu.2024.105128
Max van Haastrecht, Marcel Haas, Matthieu Brinkhuis, Marco Spruit
{"title":"了解技术强化学习的有效性标准:系统文献综述","authors":"Max van Haastrecht ,&nbsp;Marcel Haas ,&nbsp;Matthieu Brinkhuis ,&nbsp;Marco Spruit","doi":"10.1016/j.compedu.2024.105128","DOIUrl":null,"url":null,"abstract":"<div><p>Technological aids are ubiquitous in today's educational environments. Whereas much of the dust has settled in the debate on how to validate traditional educational solutions, in the area of technology-enhanced learning (TEL) many questions still remain. Technologies often abstract away student behaviour by condensing actions into numbers, meaning teachers have to assess student data rather than observing students directly. With the rapid adoption of artificial intelligence in education, it is timely to obtain a clear image of the landscape of validity criteria relevant to TEL. In this paper, we conduct a systematic review of research on TEL interventions, where we combine active learning for title and abstract screening with a backward snowballing phase. We extract information on the validity criteria used to evaluate TEL solutions, along with the methods employed to measure these criteria. By combining data on the research methods (qualitative versus quantitative) and knowledge source (theory versus practice) used to inform validity criteria, we ground our results epistemologically. We find that validity criteria tend to be assessed more positively when quantitative methods are used and that validation framework usage is both rare and fragmented. Yet, we also find that the prevalence of different validity criteria and the research methods used to assess them are relatively stable over time, implying that a strong foundation exists to design holistic validation frameworks with the potential to become commonplace in TEL research.</p></div>","PeriodicalId":10568,"journal":{"name":"Computers & Education","volume":"220 ","pages":"Article 105128"},"PeriodicalIF":8.9000,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0360131524001428/pdfft?md5=f93c6daa6b462c605f6701b4d87bb4d6&pid=1-s2.0-S0360131524001428-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Understanding validity criteria in technology-enhanced learning: A systematic literature review\",\"authors\":\"Max van Haastrecht ,&nbsp;Marcel Haas ,&nbsp;Matthieu Brinkhuis ,&nbsp;Marco Spruit\",\"doi\":\"10.1016/j.compedu.2024.105128\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Technological aids are ubiquitous in today's educational environments. Whereas much of the dust has settled in the debate on how to validate traditional educational solutions, in the area of technology-enhanced learning (TEL) many questions still remain. Technologies often abstract away student behaviour by condensing actions into numbers, meaning teachers have to assess student data rather than observing students directly. With the rapid adoption of artificial intelligence in education, it is timely to obtain a clear image of the landscape of validity criteria relevant to TEL. In this paper, we conduct a systematic review of research on TEL interventions, where we combine active learning for title and abstract screening with a backward snowballing phase. We extract information on the validity criteria used to evaluate TEL solutions, along with the methods employed to measure these criteria. 
By combining data on the research methods (qualitative versus quantitative) and knowledge source (theory versus practice) used to inform validity criteria, we ground our results epistemologically. We find that validity criteria tend to be assessed more positively when quantitative methods are used and that validation framework usage is both rare and fragmented. Yet, we also find that the prevalence of different validity criteria and the research methods used to assess them are relatively stable over time, implying that a strong foundation exists to design holistic validation frameworks with the potential to become commonplace in TEL research.</p></div>\",\"PeriodicalId\":10568,\"journal\":{\"name\":\"Computers & Education\",\"volume\":\"220 \",\"pages\":\"Article 105128\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2024-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0360131524001428/pdfft?md5=f93c6daa6b462c605f6701b4d87bb4d6&pid=1-s2.0-S0360131524001428-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0360131524001428\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Education","FirstCategoryId":"95","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0360131524001428","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Technological aids are ubiquitous in today's educational environments. Whereas much of the dust has settled in the debate on how to validate traditional educational solutions, in the area of technology-enhanced learning (TEL) many questions still remain. Technologies often abstract away student behaviour by condensing actions into numbers, meaning teachers have to assess student data rather than observing students directly. With the rapid adoption of artificial intelligence in education, it is timely to obtain a clear image of the landscape of validity criteria relevant to TEL. In this paper, we conduct a systematic review of research on TEL interventions, where we combine active learning for title and abstract screening with a backward snowballing phase. We extract information on the validity criteria used to evaluate TEL solutions, along with the methods employed to measure these criteria. By combining data on the research methods (qualitative versus quantitative) and knowledge source (theory versus practice) used to inform validity criteria, we ground our results epistemologically. We find that validity criteria tend to be assessed more positively when quantitative methods are used and that validation framework usage is both rare and fragmented. Yet, we also find that the prevalence of different validity criteria and the research methods used to assess them are relatively stable over time, implying that a strong foundation exists to design holistic validation frameworks with the potential to become commonplace in TEL research.
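
To make the screening step described in the abstract concrete, the sketch below shows one way an active-learning loop for title and abstract screening can work: a lightweight classifier is retrained on the records screened so far and proposes the most promising unscreened record next, so reviewers see likely-relevant studies early. This is a minimal illustration under assumed choices; the TF-IDF features, logistic regression model, certainty-based query rule, and the screen_with_active_learning helper are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch of active-learning-assisted title/abstract screening.
# Assumptions: TF-IDF features, logistic regression, certainty-based querying.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def screen_with_active_learning(records, decide, seed, budget):
    # records: list of "title. abstract" strings
    # decide(i): reviewer's decision for record i (1 = include, 0 = exclude)
    # seed: indices screened up front (must contain at least one include and one exclude)
    # budget: total number of records to screen manually
    features = TfidfVectorizer(stop_words="english").fit_transform(records)
    screened = {i: decide(i) for i in seed}
    while len(screened) < min(budget, len(records)):
        idx = sorted(screened)
        model = LogisticRegression(max_iter=1000)
        model.fit(features[idx], [screened[i] for i in idx])
        # Certainty-based query: present the unscreened record the model
        # currently rates as most likely to be relevant.
        prob_include = model.predict_proba(features)[:, 1]
        candidates = [i for i in range(len(records)) if i not in screened]
        nxt = max(candidates, key=lambda i: prob_include[i])
        screened[nxt] = decide(nxt)
    return [i for i, label in screened.items() if label == 1]

The backward snowballing phase mentioned in the abstract would then feed the reference lists of the included records back through the same decision step, repeating until no new inclusions appear.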

Source journal
Computers & Education (Engineering & Technology - Computer Science: Interdisciplinary Applications)
CiteScore: 27.10
Self-citation rate: 5.80%
Articles published: 204
Review time: 42 days
Journal description: Computers & Education seeks to advance understanding of how digital technology can improve education by publishing high-quality research that expands both theory and practice. The journal welcomes research papers exploring the pedagogical applications of digital technology, with a focus broad enough to appeal to the wider education community.