Max van Haastrecht, Marcel Haas, Matthieu Brinkhuis, Marco Spruit
DOI: 10.1016/j.compedu.2024.105128
Journal: Computers & Education, Volume 220, Article 105128 (JCR Q1, Computer Science, Interdisciplinary Applications; Impact Factor 8.9)
Published: 2024-07-30 (Journal Article)
Full-text PDF: https://www.sciencedirect.com/science/article/pii/S0360131524001428/pdfft?md5=f93c6daa6b462c605f6701b4d87bb4d6&pid=1-s2.0-S0360131524001428-main.pdf
Citations: 0
Understanding validity criteria in technology-enhanced learning: A systematic literature review
Technological aids are ubiquitous in today's educational environments. Whereas much of the dust has settled in the debate on how to validate traditional educational solutions, in the area of technology-enhanced learning (TEL) many questions still remain. Technologies often abstract away student behaviour by condensing actions into numbers, meaning teachers have to assess student data rather than observing students directly. With the rapid adoption of artificial intelligence in education, it is timely to obtain a clear image of the landscape of validity criteria relevant to TEL. In this paper, we conduct a systematic review of research on TEL interventions, where we combine active learning for title and abstract screening with a backward snowballing phase. We extract information on the validity criteria used to evaluate TEL solutions, along with the methods employed to measure these criteria. By combining data on the research methods (qualitative versus quantitative) and knowledge source (theory versus practice) used to inform validity criteria, we ground our results epistemologically. We find that validity criteria tend to be assessed more positively when quantitative methods are used and that validation framework usage is both rare and fragmented. Yet, we also find that the prevalence of different validity criteria and the research methods used to assess them are relatively stable over time, implying that a strong foundation exists to design holistic validation frameworks with the potential to become commonplace in TEL research.
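The abstract mentions combining active learning for title and abstract screening with backward snowballing. As an illustration only (this is not the authors' pipeline; all function names and the keyword-based "model" below are invented here), the core active-learning idea can be sketched as a pool-based loop: the reviewer labels the record the current model is least certain about, the model is updated, and screening stops once a reading budget is exhausted. Real reviews typically use a trained classifier, for example via a tool such as ASReview, rather than keyword overlap.

```python
# Toy sketch of active-learning screening for a systematic review.
# A keyword-overlap score stands in for a real relevance classifier.

def relevance_score(text, relevant_terms):
    """Fraction of known relevant terms that appear in a title/abstract."""
    words = set(text.lower().split())
    return sum(term in words for term in relevant_terms) / len(relevant_terms)

def screen(pool, oracle_labels, relevant_terms, budget):
    """Pool-based screening loop with uncertainty sampling.

    pool          -- list of title/abstract strings
    oracle_labels -- dict index -> bool (the human reviewer's decision)
    budget        -- maximum number of records the reviewer will read
    Returns the set of indices the reviewer labeled as relevant.
    """
    relevant_terms = set(relevant_terms)
    unlabeled = set(range(len(pool)))
    included = set()
    for _ in range(min(budget, len(pool))):
        # Uncertainty sampling: a score near 0.5 is most informative.
        idx = min(unlabeled,
                  key=lambda i: abs(relevance_score(pool[i], relevant_terms) - 0.5))
        unlabeled.discard(idx)
        if oracle_labels[idx]:
            included.add(idx)
            # Naive "model update": words from included records become evidence.
            relevant_terms |= set(pool[idx].lower().split())
    return included
```

The backward snowballing phase the abstract also mentions would then follow the reference lists of the included records, which this sketch deliberately omits.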
Journal introduction:
Computers & Education seeks to advance understanding of how digital technology can improve education by publishing high-quality research that expands both theory and practice. The journal welcomes research papers exploring the pedagogical applications of digital technology, with a focus broad enough to appeal to the wider education community.