Traditional radiologic technology education primarily relies on lecture-based methods and provides limited hands-on opportunities due to equipment shortages, patient-safety concerns, and limited instructor availability. In this study, we developed an immersive virtual reality (VR) educational module simulating patient preparation for magnetic resonance imaging (MRI). We compared the immediate learning impact of the VR module with that of conventional materials using a randomized two-arm pre-test/post-test design. Twenty-eight second-year radiologic technology students were recruited and randomly assigned to two groups (n = 14 each). The VR module was developed using a 360-degree video capture of the MRI patient preparation process, edited using standard authoring software, and delivered in a virtual classroom platform via an Oculus Rift S headset. After a baseline 20-item pre-test, Group A used traditional materials and Group B experienced the VR module, followed by an immediate post-test. For educational fairness and comparative feedback, participants then crossed over to the alternate modality. Expert review yielded strong content validity (S-CVI = 0.9505) and acceptance (TAM = 4.72/5). Both groups improved from pre-test to post-test. At post-test, the VR group scored higher than the traditional group (independent t-test: p = 0.035; Mann–Whitney U: p = 0.045). Change scores did not differ significantly between groups. Students reported high satisfaction with VR (4.78/5). The VR development workflow enabled rapid, potentially low-cost deployment and is transferable to other hands-on procedures in radiologic technology curricula.
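The group comparison above pairs a parametric independent t-test with a rank-based Mann–Whitney U test. As a minimal illustration of what the rank-based statistic measures, the sketch below computes U by brute force over all pairs. The scores are invented for illustration and are not the study's data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Counts, over all (xi, yj) pairs, how often a value in x exceeds one
    in y; ties count 0.5. U ranges from 0 to len(x) * len(y).
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical 20-item post-test scores (NOT the study's data)
vr = [16, 17, 15, 18, 19, 14, 17]
trad = [13, 15, 14, 16, 12, 15, 13]
print(mann_whitney_u(vr, trad))  # → 43.0, well above n1*n2/2 = 24.5
```

With n1 = n2 = 7 here, U far above n1·n2/2 indicates the first sample tends to score higher. A real analysis would use a vetted implementation such as scipy.stats.mannwhitneyu, which also supplies the p-value.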
Title: Development, validation, and comparative evaluation of an immersive virtual reality for MRI patient preparation training in radiologic technology education
Authors: Jongwat Cheewakul, Suphalak Khamruang Marshall, Sitthichok Chaichulee
DOI: 10.1016/j.cexr.2026.100134
Journal: Computers & Education: X Reality, Volume 8, Article 100134
Pub Date: 2026-01-17
Pub Date: 2026-01-15 | DOI: 10.1016/j.cexr.2026.100133
Casey J. Rodgers, Jacob Henschen, Eric Shaffer, Ann C. Sychterz
In civil engineering, the ability to translate 2D building drawings into 3D models for structural building system analysis is a critical threshold concept. Virtual Reality (VR) technology may help address this challenge, as existing studies have shown that VR produces positive learning outcomes and enhances spatial skills. However, there is limited research on using VR to improve learning of 3D building structural load paths and analysis, or on civil engineering students' comfort with VR in an educational setting. Therefore, the goals of this research were to: 1) assess students' levels of comfort when VR technology is used as an educational tool, and 2) evaluate whether implementing VR improves understanding of 3D structural engineering load-path problems, which are typically represented only in 2D. This research developed and implemented a structural analysis VR module for an undergraduate senior structural design course in the Department of Civil & Environmental Engineering. Kolb's Experiential Learning Cycle informed the design of the study, from an exploration phase to a guided walkthrough, a post-activity quiz, and finally application of the learned knowledge to a course project. The results demonstrated that students performed better when learning about building system loads in VR first and then via traditional 2D building drawings, compared to the reverse order. Students also indicated being comfortable during their experience. Overall, the contribution of this research to the body of knowledge is a VR framework for assessing improved student performance on the key structural engineering systems concept of load path.
Title: Virtual reality as a vehicle for education in the domain of structural building systems
Journal: Computers & Education: X Reality, Volume 8, Article 100133
Pub Date: 2026-01-12 | DOI: 10.1016/j.cexr.2026.100135
Silvio Lang, Tom Pfister, Tobias Kaupp
The integration of collaborative robots into manufacturing has significantly transformed industry dynamics by enhancing the effectiveness and adaptability of production processes. This study investigates the potential of mixed reality (MR) technology to transform employee training through immersive and interactive environments that seamlessly merge digital and physical realities. We introduce an MR training system for human–robot collaborative assembly tasks and evaluate its effectiveness as a training tool. The evaluation applies the following metrics: User Experience, Interaction Effectiveness, Affective-Cognitive Response, Technology Readiness, and Task Completion Time. Sixty-three participants with varied technical backgrounds, experience levels, and job roles completed a collaborative assembly task using the MR system. Our analysis covers (1) the overall effectiveness of the MR system on an absolute scale, and (2) differences in effectiveness given the participants' backgrounds; for the latter, participants were split into two MR-Affinity groups. Our results suggest that the MR system can effectively train people to work jointly with robots, and no significant difference between the two MR-Affinity groups was found except for Task Completion Time. Together, these results indicate that MR-based training for human–robot collaboration is generally useful and applicable to users from varied backgrounds.
Title: Training-effectiveness of a mixed reality system for human–robot collaboration in industrial settings
Journal: Computers & Education: X Reality, Volume 8, Article 100135
Pub Date: 2026-01-09 | DOI: 10.1016/j.cexr.2025.100130
C. Hartmann, C. Kosel, A. Wolf, M. Bannert
Immersive virtual reality (IVR) provides rich visual contexts that can enhance or impede learning, depending on how well environmental details align with instructional content. This experiment systematically examined the effects of environmental detail (between-subject factor: situated 360° scene vs. non-situated image) and content coherence (within-subject factor: text coherent vs. incoherent with visual context) on learning, coherence formation, attention, and motivation. A sample of N = 77 university students explored a virtual replica of the Sistine Chapel in a controlled lab setting. Learning outcomes, spatial presence, enjoyment, and cognitive load (as an exploratory measure) were assessed using questionnaires; coherence formation was specifically analysed using eye-tracking data. Contrary to our hypothesis, incoherent content led to better knowledge acquisition than coherent content, regardless of environmental detail. Situated scenes increased enjoyment and drew attention to environmental features but did not improve learning and even impaired it when content was coherent. Notably, we also observed sequencing effects: learning outcomes for the incoherent content were higher when students first engaged with coherent content, suggesting that early exposure to coherent material may help students better process subsequent incoherent information. Our findings challenge the assumption that immersive features inherently support learning and highlight the importance of considering sequencing and coherence in immersive instructional design. Beyond these empirical findings, the study introduces the concept of mental situation models, grounded in established theories of text comprehension and multimedia learning, as a new perspective to understand our findings and the processes of learning and coherence formation in IVR. This perspective provides a promising starting point for advancing theory building and for identifying the boundary conditions of immersive learning.
Title: Seeing the whole picture? An experiment on environmental detail and coherence formation in immersive virtual reality
Journal: Computers & Education: X Reality, Volume 8, Article 100130
Pub Date: 2025-12-18 | DOI: 10.1016/j.cexr.2025.100129
Konstantinos Koumaditis, Unnikrishnan Radhakrishnan, Lasse F. Lui, Gitte Pedersen, Francesco Chinello
Much remains unknown about how virtual reality (VR) environments affect working memory (WM), i.e. the retention of a small amount of information in a readily accessible form. While WM is associated with one's information processing capabilities, executive function, comprehension and problem solving – parameters vital for one to act in a VR environment – it is unclear whether it is affected when information is conveyed through virtual interfaces. In this study, using 106 participants, we examine whether WM in a virtual environment is affected when a 3D avatar or 2D video interface delivers verbal or spatial information, or a combination of the two types. The results did not show statistically significant differences between the two conditions for verbal or spatial tasks. However, a non-significant trend was observed in tasks engaging the central executive, suggesting a potential interface-related effect worth further exploration. These findings highlight the importance of future research on multimodal interface design in VR.
Title: Virtual reality, working memory and sensory fidelity: A study of 3D avatars versus 2D video interfaces
Journal: Computers & Education: X Reality, Volume 8, Article 100129
Pub Date: 2025-12-16 | DOI: 10.1016/j.cexr.2025.100128
Saurabh Jain, Seunghan Lee, Samuel R. Barber, Young-Jun Son
Technological advances in extended reality (XR: technology incorporating virtual reality (VR), augmented reality (AR), and mixed reality (MR)) have spurred the development of surgical simulators with the goal of immersing the user in the environment. However, there has been only incremental progress in developing surgical proficiency assessments that are critical in utilizing this technology to train safer and more efficient surgeons. We hypothesize that baseline measurements of expert surgeons (ground truth) and comparisons between users of different proficiency levels can be analyzed to develop an accurate proficiency classification of trainees on a VR-surgical simulator. The foundation of this work is based on our functional endoscopic sinus simulator (FESS) which incorporates a hierarchical task-based analysis of sinus surgery with motion-tracking of the endoscope and surgical instruments. We utilized dynamic time warping (DTW) to combine motion-tracking and operative time of sinus surgery experts to estimate a ground truth, developed a classification network combining motion-tracking and operative time using Decision Tree C5.0, and validated the accuracy of the proficiency classification by conducting extensive experiments with novice (n = 28) and expert (n = 24) users. The proposed work is critical to provide personalized and directed feedback to efficiently train surgeons utilizing simulation.
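The dynamic time warping (DTW) step described above can be sketched with the textbook DTW recurrence on 1-D sequences. This is only an illustration of the alignment idea, not the authors' multi-dimensional motion-tracking pipeline; the trace values are invented:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    dp[i][j] holds the cost of aligning a[:i] with b[:j]; each step may
    match, stretch, or compress, so trajectories executed at different
    speeds (e.g. expert vs. trainee) can still be compared.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Hypothetical 1-D instrument-position traces (illustrative only)
expert = [0.0, 0.1, 0.4, 0.8, 1.0]
trainee = [0.0, 0.0, 0.2, 0.5, 0.9, 1.0]
print(dtw_distance(expert, trainee))
```

A small distance means the trainee's trajectory closely follows the expert baseline even if executed at a different pace; such distances, combined with operative time, could feed a classifier like the Decision Tree described in the abstract.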
Title: Surgical proficiency assessment using virtual reality (VR)-based hybrid simulation for minimally invasive procedures
Journal: Computers & Education: X Reality, Volume 8, Article 100128
Pub Date: 2025-12-12 | DOI: 10.1016/j.cexr.2025.100127
Ottaviano Emma
This literature review explores the transformative potential of Extended Reality (XR), encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), when integrated with Artificial Intelligence (AI) and multimodal interfaces in learning environments for extended education. Through a systematic analysis of 72 peer-reviewed studies (2014–2024), the review identifies five main application areas: professional training, STEM education, soft skills development, general education, and assessment; as well as four pedagogical goals aimed at enhancing learning outcomes, increasing engagement and motivation, ensuring accessibility and safety, and fostering 21st-century skills.

The findings indicate that XR technologies offer significant benefits, with moderate to high effects on educational outcomes, particularly in experiential contexts. However, their effectiveness is not intrinsic but critically depends on thoughtful pedagogical design, as uncalibrated immersion may increase cognitive load. The review also highlights key challenges such as methodological standardization, geographical disparities in research, the lack of longitudinal studies, and ethical concerns regarding privacy and algorithmic bias.

It concludes that fully realizing the potential of XR requires an interdisciplinary and human-centered approach that prioritizes pedagogical integration over technology-driven innovation alone, while simultaneously promoting equity, inclusivity, and an ethical and sustainable adoption.

Background: The rapid advancement of Extended Reality (XR), Artificial Intelligence (AI), and Big Data technologies is transforming learning environments, enabling the creation of immersive, personalized, and data-driven educational contexts beyond traditional classrooms. These innovations play a crucial role in supporting lifelong and extended education, a need made even more urgent by the COVID-19 pandemic, which accelerated the search for more effective and engaging distance-learning tools.

Objective: This literature review aims to synthesize research conducted between 2014 and 2024 on technology-enhanced learning environments, with a specific focus on how XR, AI, and Big Data strengthen extended education. The goal is to identify the pedagogical opportunities and challenges associated with their integration, categorize the most advanced applications, and assess their level of maturity and adoption in academic and professional contexts.

Methods: A systematic review was conducted following PRISMA guidelines, querying academic databases such as Scopus, IEEE Xplore, and ScienceDirect. The search included peer-reviewed articles published in English between 2014 and 2024, focusing on applications of AR, VR, AI, and Big Data in extended education contexts. Specific inclusion and exclusion criteria were applied, and the results were analyzed thr
Title: Extended reality in the digital age: A literature review on technology-driven learning environments
Journal: Computers & Education: X Reality, Volume 8, Article 100127
Pub Date: 2025-12-10. DOI: 10.1016/j.cexr.2025.100126
Yohan Hwang , Hyejin Lee
This research examines the roles of virtual reality (VR) in high-stakes language testing. To this end, 23 pre-service EFL teachers used a user-friendly VR creation tool, Delightex Edu, to transform Test of English for International Communication (TOEIC) questions into a VR-contextualized testing prototype and to share their perspectives on its potential and application. The data analysis explored the key features of the prototype against communicative language teaching (CLT) principles and examined the findings using word-concordance network analysis. The thematic analysis of the testing content reveals that integrating VR with non-player characters (NPCs) adds a new level of authenticity to language use and skills integration for multimodal language learning. The keyword analysis shows that VR can stimulate test-takers’ interest and engagement by immersing them in authentic environments and providing varied contextual cues. However, there are concerns about technological support and infrastructure, such as accessibility and prolonged exposure in virtual environments, in addition to motion sickness and fatigue. There are also practical concerns about the effort and time required to create such testing items. Based on these findings, this study suggests implications for language teachers and stakeholders seeking to change language-testing practices.
"An exploratory study on the development of pre-service English teachers’ VR-contextualized testing prototype" — Yohan Hwang, Hyejin Lee. Computers & Education: X Reality, vol. 8, Article 100126. DOI: 10.1016/j.cexr.2025.100126.
Purpose
The metaverse is an interconnected network of virtual environments that facilitates immersive three-dimensional experiences through the integration of the Internet of Things, virtual reality (VR), blockchain, and augmented reality (AR). As metaverse applications expand across multiple domains, including education, it becomes essential to explore their potential impact within this context. Accordingly, the purpose of this study is to explore the adoption of the AI-driven metaverse in the education system by utilizing the Technology Acceptance Model (TAM), Hedonic Consumption Behaviour Theory (HCBT), and six other factors derived from the literature.
Method
A structured questionnaire was used to collect data from 415 valid metaverse users through the snowball sampling technique, and the data were analyzed using partial least squares structural equation modelling (PLS-SEM) in SmartPLS 4.
Results
The research validates the importance of TAM, HCBT, and the other identified factors in understanding metaverse adoption. All hypothesized relationships were supported, except the effects of perceived trialability and perceived cyber risk on intention to adopt the metaverse in the education system (IMES). Additionally, social innovativeness was found to significantly moderate the relationship between attitude and IMES.
Implications
This study identifies significant factors influencing IMES, such as perceived ease of use, perceived usefulness, and emotional involvement. These factors offer reference points for industry stakeholders, service providers, and policymakers developing more immersive and engaging metaverse platforms for virtual learning. Such platforms can foster interactive, collaborative, and pedagogically effective educational experiences, ultimately enhancing learning outcomes for both students and teachers.
"Unlocking learning potentials: Understanding user intentions in AI-fueled metaverse education" — Sanjay Dhingra, Abhishek Gupta, Kanika Chaudhry, Anil Kumar, Himanshu Falwadiya. Computers & Education: X Reality, vol. 8, Article 100131. Pub Date: 2025-12-10. DOI: 10.1016/j.cexr.2025.100131.
Pub Date: 2025-12-01. DOI: 10.1016/j.cexr.2025.100125
Matthias Wolf , Patrick Herstätter , Marvin Rantschl , Christian Ramsauer , Alejandra J. Magana
Modern manufacturing systems have become increasingly complex and require a skilled workforce, demanding innovative educational approaches that bridge theoretical knowledge and practical skills. This study addresses STEM education challenges by investigating how immersive virtual reality (VR) can enhance learner engagement, motivation, and learning in engineering education settings, thereby fostering experiential, competency-based learning. We propose learning factories as an approach to close the gap between theoretical knowledge and practical skills. Immersive learning factories integrate VR technologies as an experiential learning approach to support engineering education and manufacturing workforce development. Based on the LEAD Factory at Graz University of Technology, a fully immersive VR environment was developed to simulate real-world assembly and layout-planning tasks. This virtual learning factory includes multiple layout configurations, interactive tools, and a “Build Your Own Factory” feature for custom workflow design, providing a specific training case. This descriptive study with 30 participants evaluated the system's effectiveness across four key domains: creativity in layout design, usability, learning motivation, and cognitive load. Results show that most participants (67 %) delivered high-quality functional designs, and participants reported high usability (mean SUS = 83.9), strong intrinsic motivation (mean interest = 5.9/7), and low average cognitive load (mean NASA-TLX = 4.44/20). In conclusion, participants were able to independently design functional layouts, engage meaningfully with the tasks, and complete them with minimal frustration. These findings not only suggest that VR-based immersive learning factories are an effective, scalable, and engaging platform for industrial education and skills development, but also provide valuable evidence on how immersive environments can foster motivation and reduce cognitive load in technical engineering education.
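The usability figure reported above (mean SUS = 83.9) follows the standard System Usability Scale scoring rule: ten 1–5 Likert items, with odd (positively worded) items contributing (response − 1) and even (negatively worded) items contributing (5 − response), the sum scaled by 2.5 onto 0–100. As an illustration of that standard rule only (not the study's own analysis code; the function name `sus_score` is hypothetical), a minimal sketch in Python:

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees with every positive item (5) and
# strongly disagrees with every negative item (1) scores the maximum.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A group mean such as 83.9 is then simply the average of the per-respondent scores; values above roughly 80 are conventionally read as very good usability.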
"Immersive learning factories for promoting experiential manufacturing education and STEM competency development" — Matthias Wolf, Patrick Herstätter, Marvin Rantschl, Christian Ramsauer, Alejandra J. Magana. Computers & Education: X Reality, vol. 7, Article 100125. DOI: 10.1016/j.cexr.2025.100125.