Pub Date: 2023-08-21 | DOI: 10.3389/frvir.2023.1236095
Leyla Haghzare, Xiaona Ping, Matthew R. Arnison, David Monaghan, David Karlov, V. Honson, Juno Kim
Improving the digital presentation of fabrics can enhance the online shopping experience and, in turn, reduce textile waste. In this study, we examined how manipulating simple surface reflectance models can bias the perception of fabric properties simulated online in a web browser. We showed that motion and three-dimensional (3D) folds (i.e., rumple) influence the perception of sheen for different fabric types (cotton knit and satin). We also found complex interactions between these parameters in their effects on perceived sheen and perceived color saturation. Moreover, we showed that changing the level of specular roughness significantly influences the visual perception of sheen, color, and lightness, which, in turn, can categorically alter perceptual judgments of material type. In contrast to these visual attributes, specular roughness did not influence visually perceived tactile characteristics of digital fabrics (thickness and stretch). The knowledge gained about perceptual biases of digital fabrics from this study will inform future efforts to optimize the fidelity of textiles depicted in digital commerce.
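The abstract does not specify which reflectance model was manipulated; as an illustrative sketch only, the effect of a specular-roughness-like parameter can be seen in a simple Blinn-Phong lobe, where higher shininess (i.e., lower roughness) concentrates the highlight and reads as stronger sheen. The function and all values below are hypothetical stand-ins, not the authors' model.

```python
import math

def blinn_phong_specular(n_dot_h: float, shininess: float) -> float:
    """Specular term of a Blinn-Phong lobe.

    n_dot_h   -- cosine between surface normal and half-vector (0..1)
    shininess -- higher values model a smoother surface (lower specular roughness)
    """
    return max(0.0, n_dot_h) ** shininess

# Sample the lobe a few degrees off the mirror direction: a smoother
# surface concentrates energy at the peak, so it retains *less* energy
# off-peak -> a tighter, satin-like highlight rather than a broad sheen.
off_peak = math.cos(math.radians(10))          # 10 degrees off the highlight
rough = blinn_phong_specular(off_peak, 8)      # rough, cotton-knit-like lobe
smooth = blinn_phong_specular(off_peak, 256)   # smooth, satin-like lobe
```

Sweeping `shininess` this way is one minimal analogue of the roughness manipulation the study describes for a browser-rendered fabric.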
Title: Digital fabrics for online shopping and fashion design
Pub Date: 2023-08-17 | DOI: 10.3389/frvir.2023.1211001
Caroline L. Kuhne, Eda D. Kecelioglu, S. Maltby, Rebecca Hood, B. Knott, Elizabeth Ditton, F. R. Walker, M. Kluge
Introduction: Virtual-reality (VR) technology has, over the last decade, quickly expanded from gaming into other sectors, including training, education, and wellness. One of the most popular justifications for the use of VR over 2D is increased immersion and engagement. However, little fundamental research has evaluated the comparative impact of immersive VR on the user's cognitive, physiological, and emotional state.
Methods: A within-subject cross-over study design was used to directly compare VR and 2D screen delivery of different subject matter. Both physiological and self-report data were collected for scenes containing calming nature environments, aggressive social confrontations, and neutral content.
Results: Compared to 2D, VR delivery resulted in a higher sense of presence and higher ratings of engagement, fun, and privacy. Confrontational scenes were rated as more tense, whilst calming scenes were rated as more relaxing, when presented in VR compared to 2D. Physiological data indicated that the scenes promoted overall states of arousal and relaxation in accordance with the scene subject matter (in both VR and 2D). However, heart rate (HR) and galvanic skin response (GSR) were consistently higher throughout the VR condition compared to 2D, including during scenes of neutral and calming subject matter.
Discussion: This discrepancy between emotional and physiological responses for calming and neutral content in VR suggests an elevated arousal response driven by VR immersion that is independent of the emotional and physiological responses to the subject matter itself. These findings have important implications for those looking to develop and use VR technology as a training and educational tool, as they provide insights into the impact of immersion on the user.
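As a minimal sketch of the paired analysis that a within-subject cross-over design supports (all numbers below are hypothetical, not the study's data): each participant contributes one mean HR value per delivery condition, and the per-participant VR-minus-2D differences carry the comparison.

```python
# Hypothetical per-participant mean heart rates (bpm) for the same
# neutral scene delivered in VR and on a 2D screen (within-subject pairs,
# one entry per participant in the same order).
hr_vr = [72.0, 80.5, 68.0, 77.2, 74.9]
hr_2d = [69.5, 77.0, 66.1, 74.8, 72.0]

# Paired (within-subject) differences: a positive value means HR was
# higher under VR delivery for that participant.
diffs = [vr - flat for vr, flat in zip(hr_vr, hr_2d)]
mean_diff = sum(diffs) / len(diffs)
```

A consistently positive set of paired differences, as in this toy data, is the pattern the abstract describes for HR and GSR under VR delivery.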
Title: Direct comparison of virtual reality and 2D delivery on sense of presence, emotional and physiological outcome measures
Pub Date: 2023-08-14 | DOI: 10.3389/frvir.2023.1189717
Yuanjie Wu, S. Lukosch, Heide Lukosch, R. Lindeman, R. Mckee, Shunsuke Fukuden, Cameron Ross, D. Collins
Mental imagery practice is widely used to help athletes prepare for competitions, as it can produce motor actions that enhance performance. The goal of imagery training for athletes is to create realistic images in their minds and to familiarize themselves with certain procedures, environments, and other aspects of competition. Traditional imagery training methods use still images or videos, which athletes study or watch in order to mentally rehearse. However, factors such as distractions and low realism can affect training quality. In this paper, we present a Virtual Reality (VR) solution and a study exploring three hypotheses: H1, that high-fidelity VR systems improve mental imagery skills; H2, that the presence of elements such as virtual onlookers or photographers in the VR environment arouses stronger emotional reactions and affect; and H3, that the presence of such elements results in greater improvement of mental imagery skills. For that purpose, seven elite snow sports athletes were exposed to three training methods: Video, VR-Empty, and VR-Crowded. Our results show that a VR simulation with virtual onlookers (VR-Crowded) can significantly increase heart rate, which can induce increased emotional arousal. Results from validated questionnaires show no significant difference between the three training methods in terms of mental imagery and affect, but they show an ascending trend in athletes' arousal from the Video to the VR-Crowded condition. Gaze-detection heat maps of areas of interest for the two VR conditions support hypothesis H2, in that environmental factors such as the presence of photographers, staff, and onlookers can increase head and eye movement, possibly indicating an increase in emotional arousal during imagery training. According to verbal feedback and interviews, athletes are more likely to adopt innovative training methods (e.g., the high-fidelity VR method) than traditional video-training methods.
Title: Training mental imagery skills of elite athletes in virtual reality
Pub Date: 2023-08-14 | DOI: 10.3389/frvir.2023.1221651
Florian Ramousse, Pierre Raimbaud, P. Baert, C. Helfenstein-Didier, A. Gay, C. Massoubre, B. Galusca, G. Lavoué
Introduction: Food-related behaviors and emotions are increasingly being explored with Virtual Reality (VR). Applications of VR technologies in food science include eating-disorder therapies, eating-behavior studies, and sensory analyses. These applications involve 3D food stimuli intended to elicit cravings, stress, and/or emotions. However, the visual quality (i.e., the realism) of the food stimuli used is heterogeneous, and the influence of this factor on the results has never been isolated and evaluated. In this context, this work studies how the visual quality of food stimuli presented in a virtual reality environment influences the resulting desire to eat.
Methods: Twenty-eight subjects without eating disorders evaluated the desire to eat induced by 10 3D food stimuli, each duplicated at 7 quality levels (70 stimuli in total).
Results: Visual quality influences the desire to eat, and this effect depends on the type of food and on users' eating habits. We found two significant visual-quality thresholds: the first gives the minimal quality necessary to elicit a significant desire to eat, while the second gives the ceiling value above which increasing the quality does not further improve the desire to eat.
Discussion: These results allow us to provide useful recommendations for the design of experiments involving food stimuli.
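The abstract reports a floor threshold and a ceiling threshold for visual quality. As a hedged illustration (the study's actual statistical procedure is not described here, and the function, parameters, and ratings below are hypothetical), one way to read such thresholds off a per-item curve of mean ratings over ascending quality levels:

```python
def quality_thresholds(ratings, floor, ceiling_gain):
    """Read two threshold indices off a list of mean desire-to-eat ratings
    indexed by ascending visual-quality level:
      - minimal: first level whose rating reaches `floor`
      - ceiling: first level after which no later level improves the
        rating by more than `ceiling_gain`
    Returns (minimal, ceiling); `minimal` may be None if never reached.
    """
    minimal = next((i for i, r in enumerate(ratings) if r >= floor), None)
    ceiling = None
    for i in range(len(ratings)):
        if all(r - ratings[i] <= ceiling_gain for r in ratings[i + 1:]):
            ceiling = i
            break
    return minimal, ceiling

# Hypothetical mean ratings for 7 ascending quality levels of one food item.
ratings = [1.2, 1.8, 3.1, 4.0, 4.4, 4.5, 4.5]
minimal, ceiling = quality_thresholds(ratings, floor=3.0, ceiling_gain=0.2)
```

With this toy curve, quality level 2 is the first to elicit a rating above the floor, and nothing beyond level 4 adds a meaningful gain, mirroring the floor/ceiling pattern the abstract describes.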
Title: Does this virtual food make me hungry? Effects of visual quality and food type in virtual reality
Pub Date: 2023-08-09 | DOI: 10.3389/frvir.2023.1177855
Lee Lisle, Kylie Davidson, Edward J. K. Gitre, Chris North, D. Bowman
Analysts perform sensemaking on large, complex multimedia datasets in order to extract concepts, themes, and other kinds of insights from them. Immersive analytics, in particular, puts users in virtual environments that allow them to explore data in a unique way, interacting with and moving through the data. Previous research using virtual reality immersive analytics tools found that users wanted to refer to real-world objects or understand the physical world around them while continuing their analysis. Therefore, we designed and ran a comparative study examining the tradeoffs between virtual and augmented reality for our immersive analytics approach, Immersive Space to Think. Through two mixed-methods studies, we found that virtual reality affords users a space in which they can focus more on their task, but augmented reality allows them to use various real-world tools that can increase user satisfaction. In future immersive analytics tools, we recommend a blend of the two (augmented virtuality) with pass-through portals that allow users to see real-world tools, such as whiteboards or desks and keyboards, while still giving them a space to focus.
Title: Different realities: a comparison of augmented and virtual reality for the sensemaking process
Pub Date: 2023-08-04 | DOI: 10.3389/frvir.2023.1267071
Title: Erratum: Real-time affect detection in virtual reality: a technique based on a three-dimensional model of affect and EEG signals
Pub Date: 2023-08-04 | DOI: 10.3389/frvir.2023.1187883
Maximilian Rettinger, G. Rigoll
Virtual reality offers exciting new opportunities for training. This inspires more and more training fields to move from the real world to virtual reality, but some modalities are lost in the transition. In the real world, participants can physically interact with the training material; virtual reality offers several interaction possibilities, but do these affect the training's success, and if so, how? To find out how interaction methods influence the learning outcome, we evaluated four methods based on ordnance disposal training for civilians: 1) Real-World, 2) Controller-VR, 3) Free-Hand-VR, and 4) Tangible-VR, in a between-subjects experiment (n = 100). We show that the Free-Hand-VR method lacks haptic realism and yields the worst training outcome. Training with haptic feedback (e.g., Controller-VR, Tangible-VR, and Real-World) leads to a better overall learning effect and matches participants' self-assessments. Overall, the results indicate that free-hand interaction is improved by the addition of a tracked tangible object, but controller-based interaction is the most suitable for VR training.
Title: Touching the future of training: investigating tangible interaction in virtual reality
Pub Date: 2023-08-04 | DOI: 10.3389/frvir.2023.1214520
T. Fick, J. Meulstee, M. Köllen, J. V. van Doormaal, T. V. van Doormaal, E. Hoving
Background: Multiple 3D visualization techniques are available that obviate the need for the surgeon to mentally transform the 2D planes of an MRI into the 3D anatomy of the patient. We assessed the spatial understanding of a brain tumour when visualized with MRI, with 3D models on a monitor, or with 3D models in mixed reality.
Methods: Medical students, neurosurgical residents, and neurosurgeons were divided into three groups based on the imaging modality used for preparation: MRI, 3D viewer, or mixed reality. After preparation, the participants had to position, scale, and rotate a virtual tumour inside a virtual head of the patient to match the position, size, and orientation of the original tumour. The primary outcome was the amount of overlap between the placed tumour and the original tumour, as a measure of accuracy. Secondary outcomes were the deviations in position, volume, and rotation from the original tumour.
Results: A total of 12 medical students, 12 neurosurgical residents, and 12 neurosurgeons were included. For medical students, the mean amount of overlap for the MRI, 3D viewer, and mixed reality groups was 0.26 (0.22), 0.38 (0.20), and 0.48 (0.20), respectively; for residents, 0.45 (0.23), 0.45 (0.19), and 0.68 (0.11); and for neurosurgeons, 0.39 (0.20), 0.50 (0.27), and 0.67 (0.14). The amount of overlap for mixed reality was significantly higher at all expertise levels compared to MRI, and at the resident and neurosurgeon levels also compared to the 3D viewer. Furthermore, mixed reality showed the lowest deviations in position, volume, and rotation at all expertise levels.
Conclusion: Mixed reality enhances the spatial understanding of brain tumours compared to MRI and 3D models on a monitor. Its preoperative use may therefore help the surgeon with spatially demanding 3D surgical tasks such as patient positioning and planning surgical trajectories.
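The abstract does not specify how the 0-to-1 overlap score was computed; a Jaccard-style voxel overlap is one plausible reading, sketched below with toy data (the function and masks are hypothetical, not the authors' implementation).

```python
def jaccard_overlap(placed, original):
    """Jaccard-style overlap between two voxel masks, each given as a set
    of (x, y, z) voxel coordinates: |intersection| / |union|, in 0..1.
    Identical masks score 1.0; disjoint masks score 0.0."""
    placed, original = set(placed), set(original)
    if not placed and not original:
        return 1.0
    return len(placed & original) / len(placed | original)

# Toy 2x2x2 "tumour": the placed copy is shifted by one voxel along x,
# so half of each cube overlaps the other.
original = {(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)}
placed = {(x + 1, y, z) for x, y, z in original}
score = jaccard_overlap(placed, original)
```

On this metric, a perfectly placed tumour scores 1.0, and the one-voxel shift above yields 4 shared voxels out of 12 in the union.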
Title: Comparing the influence of mixed reality, a 3D viewer, and MRI on the spatial understanding of brain tumours
Pub Date: 2023-08-02 | DOI: 10.3389/frvir.2023.1194019
Pitch Sinlapanuntakul, Jenna Korentsides, B. Chaparro
Augmented reality is an emerging technology that allows users to interact with and manipulate virtual objects and information integrated into the physical environment. Whether replying to browser-based emails or playing a game, completing such tasks in augmented reality requires hand-tracking gestures or interactions. With the anticipated growth of this technology, future users may experience it for extended periods across a variety of applications (e.g., the metaverse). This study explores the perceptions and user experience of individuals interacting with and maneuvering in a multi-window augmented reality environment using a range of hand-tracking interactions. The results provide both qualitative and quantitative insights into these interactions, highlighting the impact of perceived usability, subjective user experience, perceived difficulty, and perceived workload on task completion.
Title: Exploring the user experience (UX) of a multi-window augmented reality environment
Pub Date: 2023-08-01 | DOI: 10.3389/frvir.2023.1235004
Phil Lopes, Jan-Niklas Voigt-Antons, Jaime Garcia, Dávid Melhárt
Title: Editorial: User states in extended reality media experiences for entertainment games