Pub Date: 2024-03-06 | DOI: 10.1007/s10055-024-00953-w
Mariano Banquiero, Gracia Valdeolivas, David Ramón, M.-Carmen Juan
This work presents the development of a mixed reality (MR) application that uses color Passthrough for learning to play the piano. A study (N = 33) was carried out to compare the participants' performance outcomes and subjective experience when using the MR application with those obtained using a Synthesia-based system. The results show that both the MR application and Synthesia were effective for learning piano; however, the students played the pieces significantly better when using the MR application. Both applications provided a satisfying user experience, but the students' subjective experience was better with the MR application. Other conclusions derived from the study include the following: (1) the students' outcomes and their subjective opinion of the experience when using the MR application were independent of age and gender; (2) the sense of presence offered by the MR application was high (above 6 on a scale of 1 to 7); (3) the adverse effects induced by wearing the Meta Quest Pro and using our MR application were negligible; and (4) the students preferred the MR application. In conclusion, the advantage of our MR application over other types of applications (e.g., non-projected piano roll notation) is that the user has a direct view of the piano and the help elements appear integrated into that view. The user does not have to take their eyes off the keyboard and can stay focused on playing the piano.
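Help elements overlaid on a real keyboard must be anchored to the physical keys. As a minimal illustration of that anchoring step (not the paper's implementation; names and layout constants are assumptions), MIDI note numbers can be mapped to horizontal key positions on an 88-key piano:

```python
# Illustrative sketch: MIDI note -> horizontal key position, in white-key
# widths from the leftmost key of an 88-key piano (A0 = MIDI 21).
# White keys land on integer positions; black keys sit halfway between
# their neighbouring white keys.

WHITE_SEMITONES = {0, 2, 4, 5, 7, 9, 11}  # C D E F G A B within an octave

def white_key_index(midi_note: int, lowest_note: int = 21) -> float:
    """Return the horizontal position of the key for `midi_note`."""
    pos = 0.0
    for n in range(lowest_note, midi_note):
        if n % 12 in WHITE_SEMITONES:
            pos += 1.0  # count the white keys to the left
    if midi_note % 12 not in WHITE_SEMITONES:
        pos -= 0.5  # black key: between the previous and next white key
    return pos
```

Multiplying this index by the measured white-key width of the physical instrument would give the overlay offset along the keyboard; for example, middle C (MIDI 60) is the 24th white key, index 23.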
Title: "A color Passthrough mixed reality application for learning piano" (Virtual Reality, Journal Article)
Pub Date: 2024-03-05 | DOI: 10.1007/s10055-024-00965-6
Hector Tovanche-Picon, Javier González-Trejo, Ángel Flores-Abad, Miguel Ángel García-Terán, Diego Mercado-Ravell
Safe autonomous landing for Unmanned Aerial Vehicles (UAVs) in populated areas is crucial for the successful integration of UAVs in populated environments. Nonetheless, validating autonomous landing in real scenarios is a challenging task with a high risk of injuring people. In this work, we propose a framework for safe, real-time, and thorough evaluation of vision-based autonomous landing in populated scenarios, using photo-realistic virtual environments and physics-based simulation. The proposed evaluation pipeline uses the Unreal graphics engine coupled with AirSim for realistic drone simulation to evaluate landing strategies. Software- and Hardware-In-The-Loop testing can then be used to assess the performance of the algorithms beforehand. The final validation stage consists of a Robot-In-The-Loop evaluation strategy in which a real drone must perform autonomous landing maneuvers in real time while an avatar drone in a virtual environment mimics its behavior and the detection algorithms run in the virtual environment (virtual reality to the robot). This method determines safe landing areas using computer vision and convolutional neural networks to avoid colliding with people in static and dynamic scenarios. To test the robustness of the algorithms under adverse conditions, different urban-like environments were implemented, including moving agents and different weather conditions. We also propose metrics to quantify the performance of the landing strategies, establishing a baseline for comparison with future work on this challenging task, and analyze them through several randomized iterations. The proposed approach allowed us to safely validate the autonomous landing strategies, providing an evaluation pipeline and a benchmark for comparison. An extensive evaluation showed a 99% success rate in static scenarios and 87% in dynamic ones, demonstrating that autonomous landing algorithms considerably reduce accidents involving humans and facilitate the integration of drones in human-populated spaces, which may help unleash the full potential of drones in urban environments. Moreover, this type of development helps increase the safety of drone operations, which would advance drone flight regulations and allow their use in closer proximity to humans.
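As a toy illustration of the safe-landing-area selection described above (a sketch under assumed inputs, not the authors' CNN pipeline: it takes an already-computed person mask on a small grid), one could pick the cell farthest from any detected person:

```python
import numpy as np

def safest_landing_cell(person_mask: np.ndarray) -> tuple:
    """Return the (row, col) grid cell farthest from any detected person.

    `person_mask` is a boolean occupancy grid where True marks a person.
    Brute-force nearest-person distance per free cell; fine for small
    grids, a distance transform would replace it at scale.
    """
    people = np.argwhere(person_mask)
    if len(people) == 0:
        # No people detected: any cell is safe, pick the centre.
        return (person_mask.shape[0] // 2, person_mask.shape[1] // 2)
    free = np.argwhere(~person_mask)
    # Pairwise Euclidean distances, shape (n_free, n_people).
    d = np.linalg.norm(free[:, None, :] - people[None, :, :], axis=-1)
    # Maximize the distance to the *nearest* person.
    best = free[d.min(axis=1).argmax()]
    return tuple(int(v) for v in best)
```

In a real pipeline the mask would come from the segmentation network and the chosen cell would be projected back into world coordinates for the landing maneuver.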
Title: "Real-time safe validation of autonomous landing in populated areas: from virtual environments to Robot-In-The-Loop" (Virtual Reality, Journal Article)
Pub Date: 2024-03-05 | DOI: 10.1007/s10055-024-00962-9
Jesus Mayor, Pablo Calleja, Felix Fuentes-Hurtado
Accurately predicting a user's displacement in virtual reality remains a challenge. Such prediction could become a key element of so-called redirected walking methods. Meanwhile, deep learning provides new tools for this type of prediction; in particular, long short-term memory recurrent neural networks have recently obtained promising results, which encourages further research on predicting the displacement of virtual reality users. This manuscript focuses on the collection of positional data and a new way to train a deep learning model to obtain more accurate predictions. Data were collected from 44 participants and analyzed with different existing prediction algorithms. The best results were obtained with a new idea: using rotation quaternions together with the three positional dimensions to train the previously existing models. The authors strongly believe that there is still much room for improvement in this research area through the use of new deep learning models.
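The input representation described, rotation quaternions plus the three positional dimensions, implies a sequence-to-target training layout. A minimal sketch of shaping such pose streams into windows for a recurrent model (window sizes and the 7-value pose layout are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def make_windows(poses: np.ndarray, seq_len: int, horizon: int):
    """Slice a pose stream into (input sequence, future displacement) pairs.

    `poses`: (T, 7) array of [x, y, z, qw, qx, qy, qz] head-pose samples.
    Each input is `seq_len` consecutive poses; the target is the position
    displacement `horizon` steps past the end of the window.
    """
    X, y = [], []
    for t in range(len(poses) - seq_len - horizon + 1):
        window = poses[t : t + seq_len]
        future = poses[t + seq_len + horizon - 1, :3]
        X.append(window)
        y.append(future - window[-1, :3])  # displacement, not absolute position
    return np.stack(X), np.stack(y)
```

An LSTM would then consume batches of shape (batch, seq_len, 7) and regress the 3-D displacement; predicting displacement rather than absolute position keeps the targets translation-invariant.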
Title: "Long short-term memory prediction of user's locomotion in virtual reality" (Virtual Reality, Journal Article)
Pub Date: 2024-03-04 | DOI: 10.1007/s10055-024-00954-9
Abstract
Virtual reality (VR) rehabilitation has proven to be a very promising method for increasing patients' focus and attention by immersing them in a virtual world and, through that, improving the effectiveness of rehabilitation. One of the biggest challenges in designing VR rehabilitation exercises is choosing feedback strategies that guide the patient and give appropriate success/failure indicators without breaking their sense of immersion. A new feedback strategy is proposed that uses non-photorealistic rendering (NPR) to highlight the parts of the exercise the patient needs to focus on and fade out the parts of the scene that are not relevant. This strategy is implemented in an authoring tool that allows rehabilitators to specify feedback strategies while creating exercise profiles. The NPR feedback can be configured in many ways, using different NPR schemes for different layers of the exercise environment, such as the background environment, the non-interactive exercise objects, and the interactive exercise objects. The main features of the system, including support for the Universal Render Pipeline, camera stacking, and stereoscopic rendering, are evaluated in a testing scenario. Performance tests regarding memory usage and supported frames per second are also considered, and a group of rehabilitators evaluated the system's usability. The proposed system meets all the requirements to apply NPR effects in VR scenarios and resolves the limitations regarding technical function and image quality. In addition, the system's performance has been shown to meet the targets for low-cost hardware. Regarding the authoring tool's usability, rehabilitators agree that it is easy to use and a valuable tool for rehabilitation scenarios. NPR schemes can be integrated into VR rehabilitation scenarios, achieving the same image quality as non-VR visualizations with only a small impact on the frame rate, and they are a good visual feedback alternative.
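The per-layer configuration described, one NPR scheme each for the background, the non-interactive objects, and the interactive objects, can be sketched as a small data structure. All names below are hypothetical illustrations, not the authoring tool's actual API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class NPRScheme(Enum):
    NONE = auto()        # render normally
    OUTLINE = auto()     # highlight with edge outlines
    DESATURATE = auto()  # fade out: grayscale / reduced contrast
    SKETCH = auto()      # pencil-sketch style

@dataclass
class ExerciseFeedbackProfile:
    """One NPR scheme per scene layer, mirroring the layer split in the
    abstract: background, non-interactive and interactive exercise objects."""
    background: NPRScheme = NPRScheme.NONE
    non_interactive: NPRScheme = NPRScheme.NONE
    interactive: NPRScheme = NPRScheme.NONE

# E.g. a profile that fades everything except the object the patient must reach.
focus_profile = ExerciseFeedbackProfile(
    background=NPRScheme.DESATURATE,
    non_interactive=NPRScheme.DESATURATE,
    interactive=NPRScheme.OUTLINE,
)
```

A rehabilitator-facing authoring tool would expose choices like these per exercise profile and hand them to the renderer, which applies the matching post-processing pass to each layer.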
Title: "Non-photorealistic rendering as a feedback strategy in virtual reality for rehabilitation" (Virtual Reality, Journal Article)
Pub Date: 2024-03-04 | DOI: 10.1007/s10055-024-00949-6
Maxime Cauz, Antoine Clarinval, Bruno Dumas
Augmented reality (AR) is making its way into many sectors. Its rapid evolution in recent years has led to prototypes demonstrating its effectiveness. However, to push these prototypes to the scale of fully usable applications, it is important to ensure the readability of the texts they include. To this end, we conducted a multivocal literature review (MLR) to determine the text parameters a designer can tune, as well as the contextual constraints they need to pay attention to, for Optical See-Through (OST) and Video See-Through (VST) displays. We also included guidelines from device manufacturers' and game engines' sites to compare the current state of research in the academic and industrial worlds. The results show that parameters pertaining to letter legibility have been studied extensively (e.g., color and size), while those pertaining to the text as a whole still require further research (e.g., alignment or line spacing). The former group of parameters, and their associated constraints, were assembled into two decision trees to facilitate the implementation of AR applications. Finally, we concluded that academic and industrial recommendations are poorly aligned.
Title: "Text readability in augmented reality: a multivocal literature review" (Virtual Reality, Journal Article)
Pub Date: 2024-03-04 | DOI: 10.1007/s10055-024-00981-6
Abstract
Objectives: (1) to assess the potential of immersive virtual reality (IVR) for achieving moderate exercise intensity, and (2) to examine the acute effects of two IVR exergame sessions (BOXVR and Beat Saber), comparing them with the impact of traditional exercise on heart rate variability (HRV), perceived effort, delayed-onset muscle soreness, motivation, and sleep. Materials and methods: A crossover design was used. The participants (n = 22) performed, in randomized order, two IVR sessions and one session of moderate-intensity physical activity, each lasting 30 min. Heart rate (HR) and HRV, the Perceived Exertion Scale, the Intrinsic Motivation Inventory, sleep quality, and perceived pain were evaluated. Results: The cardiac response was significantly higher during traditional physical activity than during the BOXVR and Beat Saber games. Traditional training produced a different HRV response than Beat Saber (LnRMSSD, p = 0.025; SDNN, p = 0.031). Although all sessions were planned for moderate intensity, BOXVR produced moderate intensity (49.3% HRreserve), Beat Saber light intensity (29.6% HRreserve), and the circuit session vigorous intensity (62.9% HRreserve). In addition, traditional training yielded higher perceived exertion and pain with less enjoyment. Differences were also observed between the exergames: BOXVR resulted in a lower cardiac response (HRmax and HRmean) and a higher perception of exertion and pain at 72 h. None of the sessions altered the sleep variables analyzed. Conclusions: BOXVR and traditional training can lead to moderate-intensity physical activity. However, traditional training could result in lower adherence to physical exercise programs, as it was perceived as more intense and less enjoyable.
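The intensity percentages reported (e.g., 49.3% HRreserve) follow the heart-rate-reserve convention, which locates a session's mean heart rate between resting and maximal values. A one-line sketch, with illustrative numbers rather than the study's data:

```python
def hr_reserve_fraction(hr_mean: float, hr_rest: float, hr_max: float) -> float:
    """Heart-rate-reserve fraction: where `hr_mean` sits between resting
    and maximal heart rate (0.0 = at rest, 1.0 = at maximum)."""
    return (hr_mean - hr_rest) / (hr_max - hr_rest)

# With assumed values of rest = 70 bpm and max = 190 bpm, a session mean of
# 129 bpm sits at about 49% HRreserve, i.e. in the moderate-intensity band.
```

Bands around roughly 30-40% (light), 40-60% (moderate), and 60%+ (vigorous) HRreserve are the usual reading of figures like those in the abstract.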
Title: "Impact of immersive virtual reality games or traditional physical exercise on cardiovascular and autonomic responses, enjoyment and sleep quality: a randomized crossover study" (Virtual Reality, Journal Article)
Pub Date: 2024-03-04 | DOI: 10.1007/s10055-024-00975-4
Lucas Paulsen, Susanne Dau, Jacob Davidsen
Immersive learning technologies such as virtual reality have long been deemed the next generation of digital learning environments, yet only a limited number of studies address how immersive technologies can be designed, applied, and studied in collaborative learning settings. This paper presents a systematic review of empirical studies reporting on the use of immersive virtual reality in collaborative learning within educational and professional learning settings. Eleven studies were grouped and coded in a textual narrative synthesis, outlining the pedagogical concepts behind the learning designs as well as the design of the virtual reality environments and the collaborative learning activities in which the technology is employed. The results suggest that collaborative learning in virtual reality can currently be conceptualised as a shared experience in an immersive, virtually mediated space with a shared goal or problem that learners must attend to collaboratively. This conceptualisation implies a need to design technologies, environments, and activities that support participation and social interaction, fostering collaborative learning processes. Based on this conceptualisation, we present a series of recommendations for designing for collaborative learning in immersive virtual reality. The paper concludes that collaborative learning in virtual reality creates a practice and reflection space in which learning is perceived as engaging, without the risk of interfering with actual practices. Current designs, however, struggle with usability, realism, and facilitating social interaction. The paper further identifies a need for future research into what happens within virtual reality, rather than relying only on post-experience evaluations.
Title: "Designing for collaborative learning in immersive virtual reality: a systematic literature review" (Virtual Reality, Journal Article)
Pub Date: 2024-03-04 | DOI: 10.1007/s10055-024-00955-8
Abstract
This study aimed to examine the challenges that adult participants experienced in immersive virtual reality (I-VR). Practitioners have indicated that some challenges persist from trainee to trainee, and scholars have called for the design and development of virtual reality (VR) applications based on learning theories. We therefore examined the challenges immersed learners experienced during self-discovery of game mechanics and an assembly task within an early-development I-VR program. We clarified the immersive learning phenomenon by studying self-reported problem statements from 168 university students and staff, who used an HTC Vive Pro Eye device and custom-built software. Through an iterative content analysis of a post-survey and video-stimulated recall interviews, we retrieved 481 problem statements from the participants, from which we derived and detailed 89 challenges, 22 component features, 11 components, and 5 principal factors of immersive learning. The components participants most often found challenging were the use of controllers and functions, reciprocal software interaction, spatial and navigational constraints, relevance realisation, and learner capabilities. Closer inspection of the quantified data revealed that participants without digital gaming experience reported relatively more hardware-related problem statements. The findings regarding the constraints of immersive learning help clarify the various actants involved in immersive learning. In this paper, we provide a summary of design implications for VR application developers. Further research on theory-based development and design implications in various immersive training settings is needed.
Title: Immersive virtual reality for complex skills training: content analysis of experienced challenges
Pub Date: 2024-03-04  DOI: 10.1007/s10055-024-00974-5
Tung-Jui Chuang, Shana Smith
Distance learning has become a popular learning channel today. However, while various distance learning tools are available, most of them only support a single platform, offer only the trainer’s perspective, and do not facilitate student-instructor interaction. As a result, distance learning systems tend to be inflexible and less effective. To address the limitations of existing distance learning systems, this study developed a cross-platform hands-on virtual lab within the Metaverse that enables multi-user participation and interaction for distance education. Four platforms, HTC VIVE Pro, Microsoft HoloLens 2, PC, and Android smartphone, are supported. The virtual lab allows trainers to demonstrate operation steps and engage with multiple trainees simultaneously. Meanwhile, trainees have the opportunity to practice their operational skills on their virtual machines within the Metaverse, utilizing their preferred platforms. Additionally, participants can explore the virtual environment and interact with each other by moving around within the virtual space, similar to a physical lab setting. The user test compares the levels of presence and usability in the hands-on virtual lab across different platforms, providing insights into the challenges associated with each platform within the Metaverse for training purposes. Furthermore, the results of the user test highlight the promising potential of the architecture due to its flexibility and adaptability.
Furthermore, the results of the user test highlight the promising potential of the architecture due to its flexibility and adaptability.
Title: A multi-user cross-platform hands-on virtual lab within the Metaverse – the case of machining training
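The multi-user interaction described above requires each client, whatever its platform, to exchange user state in a shared format. The paper does not specify its network protocol, so the sketch below is only a hypothetical illustration of such a platform-neutral pose message; all names (`PoseUpdate`, the platform strings) are assumptions, not the authors' API.

```python
# Hypothetical wire format for synchronising user poses across
# heterogeneous clients (VR headset, AR headset, PC, smartphone).
# Illustrative only -- the paper does not describe its protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class PoseUpdate:
    user_id: str
    platform: str     # e.g. "vive", "hololens2", "pc", "android"
    position: tuple   # (x, y, z) in shared world coordinates
    rotation: tuple   # quaternion (x, y, z, w)

    def to_wire(self) -> str:
        """Serialise to a platform-neutral JSON message."""
        return json.dumps(asdict(self))

    @classmethod
    def from_wire(cls, raw: str) -> "PoseUpdate":
        """Parse a JSON message back into a PoseUpdate."""
        data = json.loads(raw)
        # JSON has no tuple type, so restore tuples after decoding.
        data["position"] = tuple(data["position"])
        data["rotation"] = tuple(data["rotation"])
        return cls(**data)
```

Keeping the wire format independent of any one engine is what lets an Android phone and an HTC VIVE Pro share the same virtual space.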
Pub Date: 2024-03-01  DOI: 10.1007/s10055-024-00961-w
Vincenzo Rinaldi, Karen Ann Robertson, Graham George Strong, Niamh Nic Daeid
When attending a crime scene, first responders are responsible for identifying areas of potential interest for subsequent forensic examination. This information is shared with the police, forensic practitioners, and legal authorities during an initial meeting of all interested parties, known in Scotland as a forensic strategy meeting. Swift documentation is fundamental to allow practitioners to learn about the scene(s) and to plan investigative strategies, traditionally relying on word-of-mouth briefings supported by digital photographs, videos, diagrams, and verbal reports. We suggest that these early and critical briefings can be positively augmented by implementing an end-to-end methodology for indoor 3D reconstruction and subsequent visualisation through immersive virtual reality (VR). The main objective of this paper is to provide an integrative documentation tool to enhance decision-making processes in the early stages of an investigation. Taking a fire scene as an example, we illustrate a framework for rapid spatial data acquisition that leverages structure-from-motion photogrammetry. We developed a VR framework that enables the exploration of virtual environments on a standalone, low-cost immersive head-mounted display. The system was tested in a two-phased inter-agency fire investigation exercise, in which practitioners were asked to produce hypotheses suitable for forensic strategy meetings by (1) examining traditional documentation and then (2) using a VR walkthrough of the same premises.
The integration of VR increased the practitioners’ scene comprehension, improved hypothesis formulation with fewer caveats, and enabled participants to sketch the scene, in contrast to the orientation challenges encountered with conventional documentation.
Title: Examination of fire scene reconstructions using virtual reality to enhance forensic decision-making: a case study in Scotland
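The structure-from-motion photogrammetry mentioned above ultimately rests on triangulating 3D scene points from corresponding viewing rays in multiple photographs. The sketch below shows that core geometric step in isolation, using the classic ray-midpoint method; it illustrates the principle only and is not the authors' reconstruction pipeline.

```python
# Two-view triangulation via the ray midpoint method: given two
# camera centres and the viewing rays through a matched feature,
# return the midpoint of the shortest segment between the rays.
# Illustrative sketch of the geometry behind structure-from-motion,
# not the pipeline used in the paper.

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: camera centres; d1, d2: ray directions (3-tuples).

    Directions need not be unit length. Raises ValueError for
    (nearly) parallel rays, which cannot be triangulated.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = tuple(a - b for a, b in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero when the rays are parallel
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))
```

In a full photogrammetry system this step is repeated for thousands of feature matches and refined by bundle adjustment; here two rays that actually intersect recover the point exactly.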