Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision
Pub Date: 2024-04-16 | DOI: 10.1007/s10055-024-00987-0
Alexander Neugebauer, Nora Castner, Björn Severitt, Katarina Stingl, Iliya Ivanov, Siegfried Wahl
In this work, we explore the potential and limitations of simulating gaze-contingent tunnel vision conditions using Virtual Reality (VR) with built-in eye-tracking technology. This approach promises an easy and accessible way of expanding study populations and test groups for visual training, visual aids, or accessibility evaluations. However, it is crucial to assess the validity and reliability of simulating these types of visual impairments and to evaluate the extent to which participants with simulated tunnel vision can represent real patients. Two age-matched participant groups were recruited: the first group (n = 8, aged 20–60, average 49.1 ± 13.2) consisted of patients diagnosed with Retinitis pigmentosa (RP); the second group (n = 8, aged 27–59, average 46.5 ± 10.8) consisted of visually healthy participants with simulated tunnel vision. Both groups carried out three different visual tasks in a virtual environment for 30 min per day over the course of four weeks, and task performance as well as gaze characteristics were evaluated in both groups over the course of the study. Using the 'two one-sided tests' (TOST) equivalence procedure, the two groups were found to perform similarly in all three visual tasks. Significant differences between groups were found in several aspects of their gaze behavior, though most of these aspects appear to converge over time. Our study evaluates the potential and limitations of using VR technology to simulate the effects of tunnel vision within controlled virtual environments. We find that the simulation accurately represents the performance of RP patients at the level of group averages, but fails to fully replicate the effects on gaze behavior.
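The paper does not include implementation code; the sketch below is only a minimal illustration of how a gaze-contingent tunnel-vision mask of this kind can be computed per frame, assuming the rendered eye buffer is available as a NumPy array and the headset's eye tracker reports a pixel-space gaze sample. The function name, feathered-edge design, and parameters are our own, not the authors'; a production system would implement this as a GPU shader rather than on the CPU.

```python
import numpy as np

def tunnel_vision_mask(frame, gaze_px, radius_px, feather_px=20.0):
    """Black out everything outside a soft-edged disc centred on the gaze point.

    frame      -- H x W x 3 image (the rendered VR view for one eye)
    gaze_px    -- (x, y) gaze position in pixel coordinates from the eye tracker
    radius_px  -- radius of the simulated residual visual field, in pixels
    feather_px -- width of the smooth falloff at the field border
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
    # 1 inside the tunnel, 0 outside, with a linear ramp at the edge
    alpha = np.clip((radius_px + feather_px - dist) / feather_px, 0.0, 1.0)
    return (frame * alpha[..., None]).astype(frame.dtype)

# stand-in for one rendered eye buffer and one gaze sample
frame = np.full((1080, 1200, 3), 200, dtype=np.uint8)
masked = tunnel_vision_mask(frame, gaze_px=(600, 540), radius_px=120)
```

In a study like this one, radius_px would presumably be calibrated from the intended visual-field angle and the headset's pixels-per-degree, but the abstract does not specify those values.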
{"title":"Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision","authors":"Alexander Neugebauer, Nora Castner, Björn Severitt, Katarina Stingl, Iliya Ivanov, Siegfried Wahl","doi":"10.1007/s10055-024-00987-0","DOIUrl":"https://doi.org/10.1007/s10055-024-00987-0","url":null,"abstract":"<p>In this work, we explore the potential and limitations of simulating gaze-contingent tunnel vision conditions using Virtual Reality (VR) with built-in eye tracking technology. This approach promises an easy and accessible way of expanding study populations and test groups for visual training, visual aids, or accessibility evaluations. However, it is crucial to assess the validity and reliability of simulating these types of visual impairments and evaluate the extend to which participants with simulated tunnel vision can represent real patients. Two age-matched participant groups were acquired: The first group (n = 8, aged 20–60, average 49.1 ± 13.2) consisted of patients diagnosed with Retinitis pigmentosa (RP). The second group (n = 8, aged 27–59, average 46.5 ± 10.8) consisted of visually healthy participants with simulated tunnel vision. Both groups carried out different visual tasks in a virtual environment for 30 min per day over the course of four weeks. Task performances as well as gaze characteristics were evaluated in both groups over the course of the study. Using the ’two one-sided tests for equivalence’ method, the two groups were found to perform similar in all three visual tasks. Significant differences between groups were found in different aspects of their gaze behavior, though most of these aspects seem to converge over time. Our study evaluates the potential and limitations of using Virtual Reality technology to simulate the effects of tunnel vision within controlled virtual environments. We find that the simulation accurately represents performance of RP patients in the context of group averages, but fails to fully replicate effects on gaze behavior.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"12 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140569448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality presentation system of skeleton image based on biomedical features
Pub Date: 2024-04-16 | DOI: 10.1007/s10055-024-00976-3
Yuqing Sun, Tianran Yuan, Yimin Wang, Quanping Sun, Zhiwei Hou, Juan Du
To address the limitations of two-dimensional (2D) medical images in describing and expressing three-dimensional (3D) physical information, this paper employs a feature extraction and matching method based on the biomedical characteristics of skeletons to map 2D skeleton images onto a 3D digital model, and uses augmented reality (AR) to realize the interactive presentation of the skeleton models. The main contents of the paper are as follows: first, a three-step reconstruction method processes bone CT image data to obtain a 3D surface model, and a corresponding 2D–3D bone library is established based on identification indices linking each 2D image to its 3D model; then, a fast and accurate feature extraction and matching algorithm recognizes, extracts, and matches 2D skeletal features and determines the corresponding 3D skeleton model from the matching result; finally, an AR-based interactive immersive presentation system superimposes and renders the virtual human bone model in real-world scenes, improving the effectiveness of information expression and transmission as well as the user's immersion and embodied experience.
{"title":"Augmented reality presentation system of skeleton image based on biomedical features","authors":"Yuqing Sun, Tianran Yuan, Yimin Wang, Quanping Sun, Zhiwei Hou, Juan Du","doi":"10.1007/s10055-024-00976-3","DOIUrl":"https://doi.org/10.1007/s10055-024-00976-3","url":null,"abstract":"<p>Aimed at limitations in the description and expression of three-dimensional (3D) physical information in two-dimentsional (2D) medical images, feature extraction and matching method based on the biomedical characteristics of skeletons is employed in this paper to map the 2D images of skeletons into a 3D digital model. Augmented reality technique is used to realize the interactive presentation of skeleton models. Main contents of this paper include: Firstly, a three-step reconstruction method is used to process the bone CT image data to obtain its three-dimensional surface model, and the corresponding 2D–3D bone library is established based on the identification index of the 2D image and the 3D model; then, a fast and accurate feature extraction and matching algorithm is developed to realize the recognition, extraction, and matching of 2D skeletal features, and determine the corresponding 3D skeleton model according to the matching result. Finally, based on the augmented reality technique, an interactive immersive presentation system is designed to achieve visual effects of the virtual human bone model superimposed and rendered in the world scenes, which improves the effectiveness of information expression and transmission, as well as the user's immersion and embodied experience.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"29 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140614034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-15 | DOI: 10.1007/s10055-024-00993-2
Arthur Maneuvrier
This study explores the effect of the experimenter's gender/sex, and its interaction with the participant's gender/sex, as a potential contributor to the replicability crisis, particularly in the male-gendered domain of VR. 75 young men and women from Western France were randomly evaluated by either a man or a woman during a 13-min immersion in a first-person shooter game. Self-administered questionnaires measured variables commonly assessed in VR experiments (sense of presence, cybersickness, video game experience, flow), and the data were analyzed with MANOVAs, ANOVAs, and post-hoc comparisons. Results indicate that men and women differ in their reports of cybersickness and video game experience when rated by men, whereas they report similar measures when rated by women. These findings are interpreted as consequences of the psychosocial stress triggered by the interaction between the two genders/sexes, as well as of the gender-conformity effect induced, particularly in women, by the presence of a man in a masculine domain. Corroborating this interpretation, the subjective measure of flow, which is not linked to video games and/or computers, does not seem to be affected by this experimental effect. Methodological precautions are highlighted, notably a brief systematic description of the experimenter, and future exploratory and confirmatory studies are outlined.
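For readers wanting to reproduce this kind of 2 × 2 between-subjects analysis, a minimal MANOVA sketch with statsmodels follows. The data are synthetic and the variable names are ours, chosen only to mirror the four questionnaire measures and the two grouping factors; they are not the study's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 76  # synthetic sample of similar size to the study's 75 participants
df = pd.DataFrame({
    "presence":      rng.normal(60, 12, n),
    "cybersickness": rng.normal(30, 10, n),
    "vg_experience": rng.normal(50, 15, n),
    "flow":          rng.normal(55, 11, n),
    "participant":   rng.choice(["man", "woman"], n),
    "experimenter":  rng.choice(["man", "woman"], n),
})

# 2 x 2 between-subjects MANOVA: experimenter sex x participant sex
mv = MANOVA.from_formula(
    "presence + cybersickness + vg_experience + flow"
    " ~ experimenter * participant", data=df)
print(mv.mv_test())  # Wilks' lambda etc. for each effect and the interaction
```

Follow-up univariate ANOVAs and post-hoc comparisons, as reported in the abstract, would then be run per dependent variable.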