A systematic review of immersive technologies for education: Learning performance, cognitive load and intrinsic motivation
Matisse Poupard, Florian Larrue, Hélène Sauzéon, André Tricot
DOI: 10.1111/bjet.13503
Immersive technologies are assumed to have many benefits for learning due to their potential positive impact on optimizing learners' cognitive load and fostering intrinsic motivation. However, despite promising results, the findings regarding the actual impact on learning remain inconclusive, raising questions about the determinants of efficacy. To address these gaps, we conducted a PRISMA systematic review to investigate the contributions and limitations of virtual reality (VR) and augmented reality (AR) in learning, specifically by examining their effects on cognitive load and intrinsic motivation. Through the application of an analytical grid, we systematically classified the impact of VR/AR on the causal relationship between learning performance (ie, objective learning improvement) and cognitive load or motivation, while respecting the fundamental assumptions of the main theories related to these factors. Analysing 36 studies, the findings reveal that VR, often causing extraneous load, hinders learning, particularly among novices. In contrast, AR optimizes cognitive load, proving beneficial for novice learners but less effective for intermediate learners. The effects on intrinsic motivation remain inconclusive, likely due to variations in measurement methods. The review underscores the need for detailed, sophisticated evaluations and comprehensive frameworks that consider both cognitive load and intrinsic motivation to improve understanding of the impact of immersive technologies on learning.
What is already known about this topic
- Virtual and augmented reality show promise for education, but findings are inconsistent.
- Existing studies suggest that augmented reality optimizes learners' cognitive load.
- The literature often asserts that VR and AR are expected to enhance learning motivation.
What this paper adds
- VR introduces unnecessary cognitive load, while AR proves effective for learning performance and cognitive load, particularly for novice learners.
- The impact of AR and VR on motivation to learn is unclear.
- Our analytical grid offers a comprehensive framework for assessing the effects of AR and VR on learning outcomes.
Implications
- AR is more suitable than VR for education concerning cognitive load.
- The cost/benefit balance of VR should be carefully considered before implementation, especially for novice learners.
- Rigorous studies on motivation to learn in AR and VR contexts are essential.
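The core method in this review is the analytical grid used to classify how each study relates learning performance to cognitive load or motivation. The abstract does not describe the grid's actual coding categories, so the sketch below is only a hypothetical illustration of how such a classification could be recorded and tallied; every field name and value (technology, learner_level, learning_effect and so on) is an assumption, not the authors' coding scheme.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class StudyRecord:
    """One reviewed study, coded along hypothetical grid dimensions."""
    technology: str         # "VR" or "AR"
    learner_level: str      # e.g. "novice" or "intermediate"
    learning_effect: str    # "positive", "null" or "negative" vs. control
    load_effect: str        # e.g. "extraneous load up", "load optimized", "not measured"
    motivation_effect: str  # e.g. "up", "no difference", "not measured"

# Invented records, purely to show how a grid-based tally might work
studies = [
    StudyRecord("VR", "novice", "negative", "extraneous load up", "up"),
    StudyRecord("AR", "novice", "positive", "load optimized", "not measured"),
    StudyRecord("AR", "intermediate", "null", "load optimized", "no difference"),
]

# Cross-tabulate technology by learning effect, the kind of pattern the review reports
print(Counter((s.technology, s.learning_effect) for s in studies))
```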
{"title":"A systematic review of immersive technologies for education: Learning performance, cognitive load and intrinsic motivation","authors":"Matisse Poupard, Florian Larrue, Hélène Sauzéon, André Tricot","doi":"10.1111/bjet.13503","DOIUrl":"https://doi.org/10.1111/bjet.13503","url":null,"abstract":"Immersive technologies are assumed to have many benefits for learning due to their potential positive impact on optimizing learners' cognitive load and fostering intrinsic motivation. However, despite promising results, the findings regarding the actual impact on learning remain inconclusive, raising questions about the determinants of efficacy. To address these gaps, we conducted a PRISMA systematic review to investigate the contributions and limitations of virtual reality (VR) and augmented reality (AR) in learning, specifically by examining their effects on cognitive load and intrinsic motivations. Through the application of an analytical grid, we systematically classified the impact of VR/AR on the causal relationship between learning performance (ie, objective learning improvement) and cognitive load or motivation, while respecting the fundamental assumptions of the main theories related to these factors. Analysing 36 studies, the findings reveal that VR, often causing extraneous load, hinders learning, particularly among novices. In contrast, AR optimizes cognitive load, proving beneficial for novice learners but demonstrating less effectiveness for intermediate learners. The effects on intrinsic motivation remain inconclusive, likely due to variations in measurement methods. The review underscores the need for detailed, sophisticated evaluations and comprehensive frameworks that consider both cognitive load and intrinsic motivation to improve understanding of the impact of immersive technologies on learning.\u0000What is know\u0000\u0000Virtual and augmented reality show promise for education, but findings are inconsistent.\u0000Existing studies suggest that augmented reality optimizes learners' cognitive load.\u0000The literature often asserts that VR and AR are expected to enhance learning motivation.\u0000Adding\u0000\u0000VR introduces unnecessary cognitive load, while AR proves effective for learning performance and cognitive load, particularly for novice learners.\u0000The impact of AR and VR on motivation to learn is unclear.\u0000Our analytical grid offers a comprehensive framework for assessing the effects of AR and VR on learning outcomes.\u0000Implications\u0000\u0000AR is more suitable than VR for education concerning cognitive load.\u0000The cost/benefit balance of VR should be carefully considered before implementation, especially for novice learners.\u0000Rigorous studies on motivation to learn in AR and VR contexts are essential.\u0000\u0000","PeriodicalId":505245,"journal":{"name":"British Journal of Educational Technology","volume":"83 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141643326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timing of information presentation matters: Effects on secondary school students' cognition, motivation and emotion in game‐based learning
Yuanyuan Hu, Pieter Wouters, Marieke van der Schaaf, Liesbeth Kester
DOI: 10.1111/bjet.13510
Learning with games requires two types of information, namely domain‐specific information and game‐specific information. Presenting these two types of information together with gameplay may pose a heavy demand on cognitive resources. This study investigates how the timing of information presentation affects cognition (ie, mental effort and performance), motivation (ie, achievement goals) and emotion (ie, achievement emotions). Participants were secondary school students (N = 145) who took part in a 2 × 2 factorial experiment with two factors: timing of domain‐specific information presentation and timing of game‐specific information presentation, each either before or during gameplay. We measured mental effort, chemistry knowledge, time on task, achievement goals and achievement emotions. Multiple regression and robust regression revealed that presenting domain‐specific information before gameplay promoted higher approach goals, higher avoidance goals and more enjoyment than presenting it during gameplay. There was no difference between presenting game‐specific information before gameplay and during gameplay, except for performance‐avoidance goals. We conclude that the timing of information presentation affects motivational and emotional processes and outcomes, and that students feel more motivated and experience more enjoyment when domain‐specific information is presented before learning rather than during learning. Educators may change the timing of domain‐specific information presentation accordingly.
What is already known about this topic
- Well‐designed game‐based learning can increase learning.
- Game‐based learning needs effective instructional design features.
What this paper adds
- One instructional design feature, the timing of information presentation, affects motivation and emotion in game‐based learning.
- Students feel more motivated and experience more enjoyment when domain‐specific information is presented before learning rather than during learning.
- This study is one of the first to focus on cognitive, motivational and emotional processes and outcomes, and their interconnections.
Implications for practice and policy
- Educators would do well to present domain‐specific information before learning rather than during learning.
- Researchers studying instructional design features should attend to cognitive, motivational and emotional processes and outcomes alike, instead of just one or two.
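The analysis reported here ("multiple regression and robust regression" over a 2 × 2 factorial design) can be sketched with statsmodels. The data frame, variable names, effect sizes and the Huber M-estimator choice below are assumptions made for illustration; the paper's actual models, covariates and outcome scales are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, the two timing factors and one outcome
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "domain_timing": ["before", "before", "during", "during"] * 10,
    "game_timing":   ["before", "during", "before", "during"] * 10,
    "enjoyment":     rng.normal(3.5, 0.8, 40),
})

# 2 x 2 factorial structure: main effects of each timing factor plus their interaction
ols_fit = smf.ols("enjoyment ~ C(domain_timing) * C(game_timing)", data=df).fit()
print(ols_fit.summary())

# Robust regression (Huber M-estimator) as a check against outlying responses
rlm_fit = smf.rlm("enjoyment ~ C(domain_timing) * C(game_timing)",
                  data=df, M=sm.robust.norms.HuberT()).fit()
print(rlm_fit.params)
```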
{"title":"Timing of information presentation matters: Effects on secondary school students' cognition, motivation and emotion in game‐based learning","authors":"Yuanyuan Hu, Pieter Wouters, Marieke van der Schaaf, Liesbeth Kester","doi":"10.1111/bjet.13510","DOIUrl":"https://doi.org/10.1111/bjet.13510","url":null,"abstract":"Learning with games requires two types of information, namely domain‐specific information and game‐specific information. Presenting these two types of information together with gameplay may pose a heavy demand on cognitive resources. This study investigates how timing of information presentation affects cognition (ie, mental effort and performance), motivation (ie, achievement goals) and emotion (ie, achievement emotions). Participants were secondary school students (N = 145). Participants participated in a 2 × 2 factorial experiment with two factors—timing of domain‐specific information presentation and timing of game‐specific information presentation, either before or during gameplay. We measured mental effort, chemistry knowledge, time on task, achievement goals and achievement emotions. Multiple regression and robust regression revealed that presenting domain‐specific information before gameplay promoted higher approach goals, higher avoidance goals and more enjoyment than presenting it during gameplay. There was no difference between presenting game‐specific information before gameplay and during gameplay except for performance‐avoidance goals. We conclude that timing of information presentation affects motivational and emotional processes and outcomes and that students feel more motivated and enjoyed when domain‐specific information is presented before learning than during learning. Educators may change the timing of domain‐specific information presentation accordingly.\u0000What is already known about this topic\u0000\u0000Well‐designed game‐based learning can increase learning.\u0000Game‐based learning needs effective instructional design features.\u0000What this paper adds\u0000\u0000One instructional design feature, timing of information presentation, affects motivation and emotion in game‐based learning.\u0000Students feel more motivated and enjoyed when domain‐specific information is presented before learning than during learning.\u0000This study is one of the first to focus on cognitive, motivational and emotional processes and outcomes, and their interconnections.\u0000Implications for practice and policy\u0000\u0000Educators would do well to present domain‐specific information before learning than during learning.\u0000Researchers on instructional design features should attend to all cognitive, motivational and emotional processes and outcomes instead of just one or two.\u0000\u0000","PeriodicalId":505245,"journal":{"name":"British Journal of Educational Technology","volume":"28 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141649438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Utilizing large language models for EFL essay grading: An examination of reliability and validity in rubric‐based assessments
Fatih Yavuz, Özgür Çelik, Gamze Yavaş Çelik
DOI: 10.1111/bjet.13494
This study investigates the validity and reliability of generative large language models (LLMs), specifically ChatGPT and Google's Bard, in grading student essays in higher education based on an analytical grading rubric. A total of 15 experienced English as a foreign language (EFL) instructors and two LLMs were asked to evaluate three student essays of varying quality. The grading scale comprised five domains: grammar, content, organization, style & expression and mechanics. The results revealed that the fine‐tuned ChatGPT model demonstrated a very high level of reliability with an intraclass correlation (ICC) score of 0.972, the default ChatGPT model exhibited an ICC score of 0.947 and Bard showed a substantial level of reliability with an ICC score of 0.919. Additionally, a significant overlap was observed in certain domains when comparing the grades assigned by the LLMs and the human raters. In conclusion, the findings suggest that while the LLMs demonstrated notable consistency and potential grading competency, further fine‐tuning and adjustment are needed for a more nuanced handling of non‐objective essay criteria. The study not only offers insights into the potential use of LLMs for grading student essays but also highlights the need for continued development and research.
What is already known about this topic
- Large language models (LLMs), such as OpenAI's ChatGPT and Google's Bard, are known for their ability to generate text that mimics human‐like conversation and writing.
- LLMs can perform various tasks, including essay grading.
- Intraclass correlation (ICC) is a statistical measure used to assess the reliability of ratings given by different raters (in this case, EFL instructors and LLMs).
What this paper adds
- The study makes a unique contribution by directly comparing the grading performance of expert EFL instructors with two LLMs (ChatGPT and Bard) using an analytical grading scale.
- It provides robust empirical evidence of the high reliability of LLMs in grading essays, supported by high ICC scores.
- It specifically highlights that the overall efficacy of LLMs extends to certain domains of essay grading.
Implications for practice and/or policy
- The findings open up potential new avenues for utilizing LLMs in academic settings, particularly for grading student essays, thereby possibly alleviating the workload of educators.
- The paper's insistence on the need for further fine‐tuning of LLMs underlines the continual interplay between technological advancement and its practical application.
- The results lay the groundwork for future research on advancing the use of AI in essay grading.
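ICC is the key reliability statistic in this abstract, but the paper's summary does not state which ICC form was computed. The sketch below assumes a two-way random-effects, absolute-agreement, single-rater model (often called ICC(2,1)) and uses invented ratings purely to show the computation; the essay scores, rater count and scale are all placeholders.

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_targets, k_raters) matrix, e.g. essays in rows, raters in columns.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-essay means
    col_means = scores.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of the rating matrix
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)                 # between-essay mean square
    ms_cols = ss_cols / (k - 1)                 # between-rater mean square
    ms_err = ss_err / ((n - 1) * (k - 1))       # residual mean square

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: 3 essays scored by 4 raters on a 0-100 scale
ratings = np.array([
    [88, 85, 90, 87],
    [62, 65, 60, 64],
    [75, 78, 74, 76],
])
print(round(icc2_1(ratings), 3))
```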
{"title":"Utilizing large language models for EFL essay grading: An examination of reliability and validity in rubric‐based assessments","authors":"Fatih Yavuz, Özgür Çelik, Gamze Yavaş Çelik","doi":"10.1111/bjet.13494","DOIUrl":"https://doi.org/10.1111/bjet.13494","url":null,"abstract":"This study investigates the validity and reliability of generative large language models (LLMs), specifically ChatGPT and Google's Bard, in grading student essays in higher education based on an analytical grading rubric. A total of 15 experienced English as a foreign language (EFL) instructors and two LLMs were asked to evaluate three student essays of varying quality. The grading scale comprised five domains: grammar, content, organization, style & expression and mechanics. The results revealed that fine‐tuned ChatGPT model demonstrated a very high level of reliability with an intraclass correlation (ICC) score of 0.972, Default ChatGPT model exhibited an ICC score of 0.947 and Bard showed a substantial level of reliability with an ICC score of 0.919. Additionally, a significant overlap was observed in certain domains when comparing the grades assigned by LLMs and human raters. In conclusion, the findings suggest that while LLMs demonstrated a notable consistency and potential for grading competency, further fine‐tuning and adjustment are needed for a more nuanced understanding of non‐objective essay criteria. The study not only offers insights into the potential use of LLMs in grading student essays but also highlights the need for continued development and research.\u0000What is already known about this topic\u0000\u0000Large language models (LLMs), such as OpenAI's ChatGPT and Google's Bard, are known for their ability to generate text that mimics human‐like conversation and writing.\u0000LLMs can perform various tasks, including essay grading.\u0000Intraclass correlation (ICC) is a statistical measure used to assess the reliability of ratings given by different raters (in this case, EFL instructors and LLMs).\u0000What this paper adds\u0000\u0000The study makes a unique contribution by directly comparing the grading performance of expert EFL instructors with two LLMs—ChatGPT and Bard—using an analytical grading scale.\u0000It provides robust empirical evidence showing high reliability of LLMs in grading essays, supported by high ICC scores.\u0000It specifically highlights that the overall efficacy of LLMs extends to certain domains of essay grading.\u0000Implications for practice and/or policyThe findings open up potential new avenues for utilizing LLMs in academic settings, particularly for grading student essays, thereby possibly alleviating workload of educators.The paper's insistence on the need for further fine‐tuning of LLMs underlines the continual interplay between technological advancement and its practical applications.The results lay down a footprint for future research in advancing the use of AI in essay grading.\u0000","PeriodicalId":505245,"journal":{"name":"British Journal of Educational Technology","volume":"6 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141266099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the quality of communicating with dementia patients: A virtual reality‐based simulated communication approach
Hui‐Chen Lin, Hsin Huang, Chia‐Kuang Tsai, Shao‐Chen Chang
DOI: 10.1111/bjet.13497
Dementia patients may have language barriers and decreased comprehension ability, and their family caregivers can feel frustrated when communicating with them. Poor communication hinders family caregivers from obtaining accurate health information about patients and may also increase their emotional burden, affecting the quality of patient care. The present study developed a virtual reality‐based simulated communication training (VRSCT) system and applied it to a training course for family caregivers of dementia patients. It allowed family caregivers to simulate real‐world situations in a VR environment, experience the daily communication barriers and stress of interacting with dementia patients, and apply their acquired knowledge and skills to solve related problems. The study used a randomised controlled experimental design with mixed analysis methods. A total of 63 family caregivers were recruited and randomly divided into an experimental group (N = 32), which learned with the VRSCT system by interacting with virtual dementia patients and practising communication skills, and a control group (N = 31), which used the traditional role‐playing method for practice. Quantitative data were analysed to determine participants' knowledge of dementia care, attitudes, communication confidence and skills; in addition, a qualitative method was used to analyse the participants' discussion records. The results showed that participants using the VRSCT approach significantly improved their knowledge of dementia care, attitudes, communication confidence and communication skills compared to the control group. Participants also reported that, through the real‐time feedback of the VRSCT system, they could recognise their previous incorrect communication approaches; as a result, they adjusted their communication strategies and increased their self‐confidence.
What is already known about this topic
- Situational simulation helps learners improve their communication skills in a safe environment.
- Virtual reality (VR) creates a realistic, highly interactive learning environment, allowing users to be deeply immersed in the learning experience.
What this paper adds
- This study proposed a VR‐based simulated communication training (VRSCT) approach; moreover, seven dementia cases of different degrees of severity were designed in the VR system to enable family members to experience the challenges of caring for dementia patients that they might encounter in their daily lives.
- Each case in the VRSCT system has its own symptoms and communication barriers. The learner plays a caregiver in the story, experiencing and solving the problems and challenges posed by the system.
- The experimental results show that the proposed method improves learners' knowledge, attitudes, communication confidence and communication skills related to dementia care.
Implications for practice and/or policy
- Utilising VR training can amplify awareness and secure enhanced social support for dementia‐related challenges.
- Using VRSCT
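The study's quantitative comparison (VRSCT experimental group, N = 32, versus role-playing control, N = 31, on knowledge, attitudes, confidence and skills) lends itself to a simple between-groups test. The abstract does not report which tests the authors actually ran, so the sketch below is a minimal illustration using a Welch t-test on simulated post-test knowledge scores; all numbers are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical post-test dementia-care knowledge scores on a 0-100 scale;
# the paper's real data and exact analysis are not reproduced here.
vrsct_group = rng.normal(82, 8, 32)       # experimental group, N = 32
role_play_group = rng.normal(74, 9, 31)   # control group, N = 31

# Welch's t-test: does not assume equal variances between groups
t, p = stats.ttest_ind(vrsct_group, role_play_group, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```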
{"title":"Improving the quality of communicating with dementia patients: A virtual reality‐based simulated communication approach","authors":"Hui‐Chen Lin, Hsin Huang, Chia‐Kuang Tsai, Shao‐Chen Chang","doi":"10.1111/bjet.13497","DOIUrl":"https://doi.org/10.1111/bjet.13497","url":null,"abstract":"Dementia patients may have language barriers and decreased comprehension ability. Their family caregivers can feel frustrated when communicating with them. Poor communication hinders family caregivers from obtaining accurate health information about patients, and may also increase their emotional burden, affecting patient care quality. The present study developed a virtual reality‐based simulated communication training (VRSCT) system and applied it to a training course for family caregivers of dementia patients. It allowed family caregivers to simulate real‐world situations in a VR environment, experience the daily communication barriers and stress with dementia patients, and apply their acquired knowledge and skills to solve related problems. This study used a randomised control experimental design with mixed analysis methods. A total of 63 family caregivers were recruited and randomly divided into the experimental group (N = 32) learning with the VRSCT system to interact with virtual dementia patients and practice communication skills, and the control group (N = 31) using the traditional role‐playing method for practice. Quantitative data were analysed to determine participants' knowledge of dementia care, attitudes, communication confidence and skills. In addition, the qualitative method was used to analyse the participants' discussion records. The results showed that by using the VRSCT approach, participants significantly improved their knowledge of dementia care, attitudes, communication confidence and communication skills compared to the control group. In addition, participants reported that through the real‐time feedback of the VRSCT system, they could recognise their previous incorrect communication approach. As a result, they adjusted their communication strategies and increased their self‐confidence.\u0000What is already known about this topic\u0000\u0000Situational simulation helps learners improve their communication skills in a safe environment.\u0000Virtual reality (VR) creates a realistic, highly interactive learning environment, allowing users to be deeply immersed in the learning experience.\u0000What this paper adds\u0000\u0000This study proposed a VR‐based simulated communication training (VRSCT) approach; moreover, seven dementia cases of different degrees of severity were designed in the VR system to enable family members to experience possible challenges of taking care of dementia patients they might encounter in their daily lives.\u0000Each case in the VRSCT system has its unique symptoms and communication barriers. 
The learner in the story plays a caregiver, experiencing and solving the problems and challenges posed by the system.\u0000The experimental results show that the proposed method improves learners' knowledge, attitudes, communication confidence, and communication skills related to dementia care.\u0000Implications for practice and/or policy\u0000\u0000Utilising VR training can amplify awareness and secure enhanced social support for dementia‐related challenges.\u0000Using VRSCT","PeriodicalId":505245,"journal":{"name":"British Journal of Educational Technology","volume":"6 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141265827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What makes tablet‐based learning effective? A study of the role of real‐time adaptive feedback
Tiphaine Colliot, Omar Krichen, Nathalie Girard, Éric Anquetil, Éric Jamet
DOI: 10.1111/bjet.13439
This study investigated the added value of real‐time adaptive feedback on seventh graders' performance in tablet‐based geometry learning. To isolate the effects of the medium (ie, the tablet) from those of the feedback, three groups were compared: paper‐and‐pencil, pen‐based tablet without feedback and pen‐based tablet with feedback. The feedback was provided by a tutoring system based on artificial intelligence that automatically interpreted students' pen strokes on the screen. A total of 85 French students drew three geometric shapes, either on paper or on a tablet, and then performed a transfer task on paper. Results showed that using a tablet without feedback did not improve learning but seemed to enhance interest in the task compared to the paper‐and‐pencil group. Students in the tablet-with-feedback group performed significantly better than the other two groups on learning as well as on transfer. This study was the first to combine media comparison and added‐value approaches to test the effects of using a new educational app on a pen‐based tablet on students' geometry performance in a naturalistic classroom environment. Results showed that it was not the medium used but the intelligent tutoring system‐based feedback that improved students' performance. Our data therefore indicate that artificial intelligence is a promising way of providing learners with real‐time adaptive feedback in order to improve their performance.
What is already known about this topic
- Previous meta‐analyses have investigated the effects of tablet‐based learning.
- Tablet computers have been shown to increase students' motivation.
- Yet, the influence of tablet computers on learning outcomes remains inconclusive.
- Other studies show that certain features of learning environments, such as feedback, have positive effects on learning.
What this paper adds
- Most previous studies adopted a media comparison approach (paper‐ vs. tablet‐based instruction). We combine this approach with an added‐value approach by providing or withholding real‐time AI‐based feedback.
- Results showed that tablet use increased children's interest but not their learning outcomes.
- Feedback improved children's performance on a training task and a later transfer task on paper.
Implications for practice and/or policy
- Tablet computers can promote students' interest in the task during geometry instruction.
- App features play a critical role in improving students' learning.
- Specifically, AI‐based adaptive feedback helps children to perform better on a geometry task.
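The three-group comparison here (paper-and-pencil vs. tablet without feedback vs. tablet with feedback) on learning and transfer scores is typically analysed with an omnibus test plus pairwise follow-ups. The abstract does not report the authors' exact statistical procedure or group sizes, so the sketch below is an assumed illustration using a one-way ANOVA and Tukey HSD on simulated transfer scores for 85 students.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Hypothetical transfer-task scores; the 28/28/29 split is assumed, not reported
paper = rng.normal(10.0, 3.0, 28)
tablet_no_fb = rng.normal(10.5, 3.0, 28)
tablet_fb = rng.normal(13.0, 3.0, 29)

# Omnibus one-way ANOVA across the three conditions
f, p = stats.f_oneway(paper, tablet_no_fb, tablet_fb)
print(f"F = {f:.2f}, p = {p:.4f}")

# Pairwise comparisons (Tukey HSD) to locate which groups differ
scores = np.concatenate([paper, tablet_no_fb, tablet_fb])
groups = ["paper"] * 28 + ["tablet"] * 28 + ["tablet+feedback"] * 29
print(pairwise_tukeyhsd(scores, groups))
```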
{"title":"What makes tablet‐based learning effective? A study of the role of real‐time adaptive feedback","authors":"Tiphaine Colliot, Omar Krichen, Nathalie Girard, Éric Anquetil, Éric Jamet","doi":"10.1111/bjet.13439","DOIUrl":"https://doi.org/10.1111/bjet.13439","url":null,"abstract":"This study investigated the added value of real‐time adaptive feedback on seventh graders' performances in tablet‐based geometry learning. To isolate the effects of the medium (ie, tablet) from those of the feedback, three groups were compared: paper‐and‐pencil, pen‐based tablet without feedback and pen‐based tablet with feedback. The feedback was provided by a tutoring system based on an artificial intelligence that automatically interpreted students' pen strokes on the screen. A total of 85 French students drew three geometric shapes, either on paper or on a tablet, and then performed a transfer task on paper. Results showed that using a tablet without feedback did not improve learning but seemed to enhance interest in the task compared to the paper‐and‐pencil group. Students in the tablet with feedback group performed significantly better than the other two groups on learning, as well as on transfer. This study was the first to combine media comparison and added‐value approaches to test the effects on students' geometry performances of using a new educational app on a pen‐based tablet in a naturalistic classroom environment. Results showed that it was not the medium used but the intelligent tutoring system‐based feedback that improved students' performance. Our data therefore indicate that artificial intelligence is a promising way of providing learners with real‐time adaptive feedback in order to improve their performances.\u0000What is already known about this topic\u0000\u0000Previous meta‐analyses have investigated the effects of tablet‐based learning.\u0000Tablet computers have been proven to increase students' motivation.\u0000Yet, the influence of tablet computers on learning outcomes remains inconclusive.\u0000Other studies show that certain features of environments, such as feedback, have positive effects on learning.\u0000What this paper adds\u0000\u0000Most of the previous studies adopted a media comparison approach (paper‐ vs. tablet‐based instruction).\u0000We combine this approach with an added‐value approach by adding or not real‐time AI‐based feedback.\u0000Results showed that tablet use increased children's interest but not their learning outcomes.\u0000Feedback improved children's performance in a training task and a later transfer paper task.\u0000Implications for practice and/or policy\u0000\u0000Tablet computers can promote students' interest in the task during geometry instruction.\u0000App features play a critical role in improving students' learning.\u0000Specifically, IA‐based adaptive feedback helps children to perform better on a geometry task.\u0000\u0000","PeriodicalId":505245,"journal":{"name":"British Journal of Educational Technology","volume":"14 S20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139794650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}