Pub Date: 2024-05-04 | DOI: 10.1007/s10055-024-01004-0
Nick Michiels, Lode Jorissen, Jeroen Put, Jori Liesenborgs, Isjtar Vandebroeck, Eric Joris, Frank Van Reeth
Extended reality (XR) experiences are on the verge of widespread adoption in diverse application domains. An essential part of the technology is accurate tracking and localization of the headset to create an immersive experience. A subset of applications requires perfect co-location between the real and the virtual world, where virtual objects are aligned with their real-world counterparts. Current headsets support co-location for small areas but suffer from drift when scaling up to larger ones such as buildings or factories. This paper proposes tools and solutions for this challenge by splitting simultaneous localization and mapping (SLAM) into separate mapping and localization stages. In the pre-processing stage, a feature map is built for the entire tracking area. A global optimizer is applied to correct the deformations caused by drift, guided by a sparse set of ground-truth markers in the point cloud of a laser scan. Optionally, further refinement is applied by matching features between the ground-truth keyframe images and their rendered-out SLAM estimates of the point cloud. In the second, real-time stage, the rectified feature map is used to perform localization and sensor fusion between the global tracking system and the headset. The results show that the approach achieves robust co-location between the virtual and the real 3D environment for large and complex tracking environments.
Title: Tracking and co-location of global point clouds for large-area indoor environments
Journal: Virtual Reality
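The pre-processing stage described above anchors a drifted SLAM map to ground-truth markers from a laser scan. As an illustrative sketch (not the paper's actual optimizer), a first global correction can be computed from marker correspondences with a least-squares similarity transform (Umeyama's method); the marker coordinates and drift model below are synthetic.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points onto dst (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of the point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                          # guard against a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical markers: laser-scan ground truth vs. drifted SLAM estimates
# (drift modeled as a small rotation, scale error, and offset).
rng = np.random.default_rng(0)
gt = rng.uniform(0, 50, (8, 3))               # marker positions in metres
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
drift = 0.98 * gt @ Rz.T + np.array([0.3, -0.2, 0.05])

s, R, t = umeyama_alignment(drift, gt)
rectified = s * drift @ R.T + t
print(np.abs(rectified - gt).max())           # residual after rectification
```

A real pipeline would follow this rigid/similarity correction with the non-rigid refinement the paper's global optimizer performs, since accumulated drift is rarely a single global transform.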
Pub Date: 2024-04-30 | DOI: 10.1007/s10055-024-00998-x
Fotios Spyridonis, Damon Daylamani-Zad, James Nightingale
Speech anxiety, or glossophobia, currently affects approximately 75% of the population, with potentially severe negative effects on those with the condition. Several treatments are currently available, and research shows that Virtual Reality (VR), used as a non-pharmacologic treatment, can have positive effects on individuals suffering from such social phobias. However, treatments specifically targeting speech anxiety remain scarce, even though such a large share of the population is affected by it. In this paper, we aim to contribute to efforts to reduce the effects of speech anxiety through a VR intervention. Our VR solution was designed following the Exposure Therapy approach for treating social anxiety disorders. The evaluation of this work was twofold: A. to assess the ability of our solution to positively change participants' perception of factors related to non-verbal communication that contribute to anxiety toward public speaking, and B. to determine whether it is able to induce a sense of presence. We carried out an empirical evaluation study that measured participants' self-reported anxiety toward public speaking using the Personal Report of Public Speaking Anxiety and their perceived sense of presence using the iGroup Presence Questionnaire. Our results demonstrate the potential of VR Exposure Therapy solutions to help positively change perceptions of the non-verbal communication factors that increase public speaking anxiety in participants with self-reported speech anxiety symptoms. Our findings are of wider importance as they contribute to ongoing efforts to treat social anxiety-related phobias.
Title: PublicVR: a virtual reality exposure therapy intervention for adults with speech anxiety
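The Personal Report of Public Speaking Anxiety used above is a Likert-style instrument in which some items are reverse-scored. The sketch below shows generic reverse-keyed Likert scoring; the item count, responses, and reverse-scored indices are placeholders, not the published PRPSA scoring key.

```python
# Generic Likert scoring with reverse-keyed items. The indices in REVERSED
# are hypothetical placeholders -- substitute the instrument's own key.
REVERSED = {2, 5, 7}          # reverse-scored item positions (0-based)

def score(responses, reversed_items=REVERSED, scale_max=5):
    """Sum Likert responses, flipping reverse-keyed items (r -> max+1-r)."""
    total = 0
    for i, r in enumerate(responses):
        if not 1 <= r <= scale_max:
            raise ValueError(f"item {i}: response {r} outside 1..{scale_max}")
        total += (scale_max + 1 - r) if i in reversed_items else r
    return total

# 8-item toy example on a 1-5 scale.
print(score([3, 4, 2, 5, 1, 3, 2, 4]))
```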
Pub Date: 2024-04-26 | DOI: 10.1007/s10055-024-00997-y
Jieun Lee, Seokhyun Hwang, Kyunghwan Kim, SeungJun Kim
In virtual reality, redirected walking (RDW) enables users to stay within the tracking area while feeling that they are traveling through a virtual space larger than the physical one. RDW draws the user's gaze with a visual attractor and manipulates the scene for intermittent reorientation. However, repeated use can hinder immersion in the virtual world and weaken reorientation performance. In this study, we propose using sounds and smells as alternative stimuli to draw the user's attention implicitly and sustain the attractor's performance across intermittent reorientations. To achieve this, we integrated visual, auditory, and olfactory attractors into an all-in-one stimulation system. Experiments revealed that the auditory attractor caused the fastest reorientation, the olfactory attractor induced the widest angular difference, and the combined auditory and olfactory attractor induced the largest angular speed while keeping users from noticing the manipulation. The findings demonstrate the potential of nonvisual attractors to reorient users in situations requiring intermittent reorientation.
Title: Evaluation of visual, auditory, and olfactory stimulus-based attractors for intermittent reorientation in virtual reality locomotion
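Reorientation via an attractor typically works through rotation gains: while the attractor draws the user's head around, the virtual scene rotates slightly more or less than the physical head, injecting a correction below the perceptual threshold. A minimal sketch of that gain arithmetic follows; the gain bounds and angles are illustrative, not values from this study.

```python
import math

# Imperceptibility bounds for rotation gain (illustrative values; RDW studies
# commonly report detection thresholds roughly in the 0.67-1.24 range).
GAIN_MIN, GAIN_MAX = 0.8, 1.2

def redirect(physical_yaw_delta, correction_remaining):
    """Scale one frame of physical head rotation so the virtual scene drifts
    toward the desired correction without exceeding the perceptual bounds.
    Returns (virtual rotation applied, correction still to inject)."""
    if physical_yaw_delta == 0:
        return 0.0, correction_remaining     # no head motion, no gain to hide in
    desired = physical_yaw_delta + correction_remaining
    gain = max(GAIN_MIN, min(GAIN_MAX, desired / physical_yaw_delta))
    virtual_delta = gain * physical_yaw_delta
    injected = virtual_delta - physical_yaw_delta
    return virtual_delta, correction_remaining - injected

# User turns toward an attractor in 5-degree steps; 6 degrees of scene
# correction are injected imperceptibly across the turn.
remaining = math.radians(6)
for _ in range(12):
    virtual, remaining = redirect(math.radians(5), remaining)
print(round(math.degrees(remaining), 6))     # leftover correction
```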
Pub Date: 2024-04-16 | DOI: 10.1007/s10055-024-00987-0
Alexander Neugebauer, Nora Castner, Björn Severitt, Katarina Stingl, Iliya Ivanov, Siegfried Wahl
In this work, we explore the potential and limitations of simulating gaze-contingent tunnel vision conditions using Virtual Reality (VR) with built-in eye tracking technology. This approach promises an easy and accessible way of expanding study populations and test groups for visual training, visual aids, or accessibility evaluations. However, it is crucial to assess the validity and reliability of simulating these types of visual impairments and to evaluate the extent to which participants with simulated tunnel vision can represent real patients. Two age-matched participant groups were acquired: the first group (n = 8, aged 20–60, average 49.1 ± 13.2) consisted of patients diagnosed with Retinitis pigmentosa (RP); the second group (n = 8, aged 27–59, average 46.5 ± 10.8) consisted of visually healthy participants with simulated tunnel vision. Both groups carried out different visual tasks in a virtual environment for 30 min per day over the course of four weeks. Task performance as well as gaze characteristics were evaluated in both groups over the course of the study. Using the 'two one-sided tests for equivalence' method, the two groups were found to perform similarly in all three visual tasks. Significant differences between groups were found in several aspects of their gaze behavior, though most of these aspects seem to converge over time. Our study evaluates the potential and limitations of using Virtual Reality technology to simulate the effects of tunnel vision within controlled virtual environments. We find that the simulation accurately represents the performance of RP patients at the level of group averages, but fails to fully replicate the effects on gaze behavior.
Title: Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision
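The 'two one-sided tests for equivalence' (TOST) procedure used above declares two groups equivalent when the mean difference is significantly above -Δ and significantly below +Δ. The dependency-free sketch below uses a normal approximation on synthetic scores; Δ, the sample sizes, and the data are all hypothetical, and a real analysis of n = 8 groups would use the t distribution (e.g. via scipy.stats or statsmodels).

```python
from statistics import NormalDist, mean, stdev
import random

def tost(x, y, delta):
    """Two one-sided tests for equivalence of means within +/- delta.
    Uses a normal approximation (adequate for large samples only)."""
    nx, ny = len(x), len(y)
    se = (stdev(x) ** 2 / nx + stdev(y) ** 2 / ny) ** 0.5   # Welch-style SE
    d = mean(x) - mean(y)
    nd = NormalDist()
    p_lower = 1 - nd.cdf((d + delta) / se)   # H0: true difference <= -delta
    p_upper = nd.cdf((d - delta) / se)       # H0: true difference >= +delta
    return max(p_lower, p_upper)             # equivalence if below alpha

# Synthetic task scores for two groups (hypothetical numbers).
rng = random.Random(7)
a = [rng.gauss(100, 10) for _ in range(200)]
b = [rng.gauss(101, 10) for _ in range(200)]
p = tost(a, b, delta=5.0)
print(round(p, 4))                           # small p -> groups equivalent
```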
Pub Date: 2024-04-16 | DOI: 10.1007/s10055-024-00976-3
Yuqing Sun, Tianran Yuan, Yimin Wang, Quanping Sun, Zhiwei Hou, Juan Du
To address the limitations of two-dimensional (2D) medical images in describing and expressing three-dimensional (3D) physical information, this paper employs a feature extraction and matching method based on the biomedical characteristics of skeletons to map 2D skeleton images onto a 3D digital model. Augmented reality is used to realize interactive presentation of the skeleton models. The main contributions are as follows. First, a three-step reconstruction method processes bone CT image data to obtain a three-dimensional surface model, and a corresponding 2D–3D bone library is established, indexed by identifiers linking each 2D image to its 3D model. Then, a fast and accurate feature extraction and matching algorithm recognizes, extracts, and matches 2D skeletal features, and the corresponding 3D skeleton model is determined from the matching result. Finally, based on augmented reality, an interactive immersive presentation system superimposes and renders the virtual human bone model in real-world scenes, improving the effectiveness of information expression and transmission as well as the user's immersion and embodied experience.
Title: Augmented reality presentation system of skeleton image based on biomedical features
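The 2D–3D bone library lookup described above amounts to matching query-image descriptors against each library entry and selecting the best-scoring 3D model. The sketch below uses brute-force nearest-neighbor matching with Lowe's ratio test on synthetic descriptors; a real system would extract descriptors with a detector such as ORB or SIFT, and the model names here are hypothetical.

```python
import numpy as np

def match_count(query, library_desc, ratio=0.75):
    """Count query descriptors passing Lowe's ratio test against one library
    entry (brute-force L2 matching: best match must clearly beat the second)."""
    good = 0
    for q in query:
        d = np.linalg.norm(library_desc - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:
            good += 1
    return good

# Hypothetical library: one descriptor set per 3D skeleton model.
rng = np.random.default_rng(1)
library = {f"model_{k}": rng.normal(size=(60, 32)) for k in range(3)}

# Query: a noisy subset of model_1's descriptors, as if the same bone
# were imaged again under slightly different conditions.
query = library["model_1"][:20] + rng.normal(scale=0.05, size=(20, 32))

best = max(library, key=lambda k: match_count(query, library[k]))
print(best)
```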
Pub Date: 2024-04-15 | DOI: 10.1007/s10055-024-00993-2
Arthur Maneuvrier
This study explores the effect of the experimenter’s gender/sex and its interaction with the participant’s gender/sex as potential contributors to the replicability crisis, particularly in the man-gendered domain of VR. 75 young men and women from Western France were randomly evaluated by either a man or a woman during a 13-min immersion in a first-person shooter game. Self-administered questionnaires were used to measure variables commonly assessed during VR experiments (sense of presence, cybersickness, video game experience, flow). MANOVAs, ANOVAs and post-hoc comparisons were used. Results indicate that men and women differ in their reports of cybersickness and video game experience when rated by men, whereas they report similar measures when rated by women. These findings are interpreted as consequences of the psychosocial stress triggered by the interaction between the two genders/sexes, as well as the gender conformity effect induced, particularly in women, by the presence of a man in a masculine domain. Corroborating this interpretation, the subjective measure of flow, which is not linked to video games and/or computers, does not seem to be affected by this experimental effect. Methodological precautions are highlighted, notably the brief systematic description of the experimenter, and future exploratory and confirmatory studies are outlined.
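The crossed design above (participant gender/sex × experimenter gender/sex) is what the MANOVAs and ANOVAs probe. The sketch below computes the 2×2 interaction contrast on synthetic scores shaped like the reported pattern, a group difference under a man experimenter that vanishes under a woman; all numbers are hypothetical, not the study's data.

```python
import numpy as np

# Synthetic cybersickness scores for a 2x2 design (hypothetical numbers):
# key = (participant gender, experimenter gender); 75 participants total.
rng = np.random.default_rng(3)
cells = {
    ("man",   "man"):   rng.normal(10, 2, 19),
    ("woman", "man"):   rng.normal(16, 2, 19),   # gap under a man experimenter
    ("man",   "woman"): rng.normal(12, 2, 19),
    ("woman", "woman"): rng.normal(12, 2, 18),   # gap vanishes under a woman
}

means = {k: v.mean() for k, v in cells.items()}
# 2x2 interaction contrast: (gender gap under man E) - (gap under woman E).
# A nonzero value is what the ANOVA interaction term tests.
interaction = (means[("woman", "man")] - means[("man", "man")]) - (
    means[("woman", "woman")] - means[("man", "woman")]
)
print(round(interaction, 2))
```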