Head-locked, world-locked, or conformal diminished-reality? An examination of different AR solutions for pedestrian safety in occluded scenarios
Pub Date: 2024-05-27 | DOI: 10.1007/s10055-024-01017-9
Joris Peereboom, Wilbert Tabone, Dimitra Dodou, Joost de Winter
Many collisions between pedestrians and cars are caused by poor visibility, such as occlusion by a parked vehicle. Augmented reality (AR) could help to prevent this problem, but it is unknown to what extent the augmented information needs to be embedded into the world. In this virtual reality experiment with a head-mounted display (HMD), 28 participants were exposed to AR designs in a scenario where a vehicle approached from behind a parked vehicle. The experimental conditions included a head-locked live video feed of the occluded region, fixed in a specific location within the view of the HMD (VideoHead); a world-locked video feed displayed across the street (VideoStreet); and two conformal diminished-reality designs: a see-through display on the occluding vehicle (VideoSeeThrough) and a solution in which the occluding vehicle is made semi-transparent (TransparentVehicle). A Baseline condition without augmented information served as a reference. Additionally, the VideoHead and VideoStreet conditions were each tested with and without a guiding arrow indicating the location of the approaching vehicle. Participants performed 42 trials, 6 per condition, during which they held down a key whenever they felt safe to cross. The keypress percentages and questionnaire responses showed that the diminished-reality TransparentVehicle and VideoSeeThrough designs came out most favourably, while the VideoHead solution caused some discomfort and dissatisfaction. An analysis of head yaw angle showed that VideoHead and VideoStreet divided attention between the screen and the approaching vehicle. The guiding arrows added no demonstrable value. AR designs with a high level of local embeddedness are beneficial for addressing occlusion problems when crossing. However, head-locked solutions should not be dismissed outright: according to the literature, they can serve tasks where a salient warning or instruction is beneficial.
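The head-locked versus world-locked distinction is a question of which coordinate frame the AR content is anchored in. The sketch below is not the authors' implementation; it is a minimal illustration of the two anchoring strategies, with poses simplified to a 2D position plus yaw and purely illustrative offset values.

```python
# Minimal sketch contrasting head-locked and world-locked anchoring of
# an AR panel. Poses are simplified to 2D position + yaw; the offsets
# are illustrative, not the paper's values.
import numpy as np

def yaw_matrix(yaw_rad: float) -> np.ndarray:
    """2D rotation matrix for a head yaw angle."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s], [s, c]])

def head_locked_panel(head_pos, head_yaw, offset=np.array([0.0, 1.5])):
    """Panel rides with the HMD: its world pose is re-derived from the
    head pose every frame, so it stays fixed in the user's view."""
    return head_pos + yaw_matrix(head_yaw) @ offset

def world_locked_panel(anchor_pos=np.array([4.0, 6.0])):
    """Panel is pinned to a world coordinate (e.g. across the street)
    and ignores head motion entirely."""
    return anchor_pos

head_pos, head_yaw = np.array([0.0, 0.0]), np.deg2rad(30)
print(head_locked_panel(head_pos, head_yaw))  # moves as the head turns
print(world_locked_panel())                   # constant in world space
```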
{"title":"Head-locked, world-locked, or conformal diminished-reality? An examination of different AR solutions for pedestrian safety in occluded scenarios","authors":"Joris Peereboom, Wilbert Tabone, Dimitra Dodou, Joost de Winter","doi":"10.1007/s10055-024-01017-9","DOIUrl":"https://doi.org/10.1007/s10055-024-01017-9","url":null,"abstract":"<p>Many collisions between pedestrians and cars are caused by poor visibility, such as occlusion by a parked vehicle. Augmented reality (AR) could help to prevent this problem, but it is unknown to what extent the augmented information needs to be embedded into the world. In this virtual reality experiment with a head-mounted display (HMD), 28 participants were exposed to AR designs, in a scenario where a vehicle approached from behind a parked vehicle. The experimental conditions included a head-locked live video feed of the occluded region, meaning it was fixed in a specific location within the view of the HMD (<i>VideoHead</i>), a world-locked video feed displayed across the street (<i>VideoStreet</i>), and two conformal diminished reality designs: a see-through display on the occluding vehicle (<i>VideoSeeThrough</i>) and a solution where the occluding vehicle has been made semi-transparent (<i>TransparentVehicle</i>). A <i>Baseline</i> condition without augmented information served as a reference. Additionally, the <i>VideoHead</i> and <i>VideoStreet</i> conditions were each tested with and without the addition of a guiding arrow indicating the location of the approaching vehicle. Participants performed 42 trials, 6 per condition, during which they had to hold a key when they felt safe to cross. The keypress percentages and responses from additional questionnaires showed that the diminished-reality <i>TransparentVehicle</i> and <i>VideoSeeThrough</i> designs came out most favourably, while the <i>VideoHead</i> solution caused some discomfort and dissatisfaction. An analysis of head yaw angle showed that <i>VideoHead</i> and <i>VideoStreet</i> caused divided attention between the screen and the approaching vehicle. The use of guiding arrows did not contribute demonstrable added value. AR designs with a high level of local embeddedness are beneficial for addressing occlusion problems when crossing. However, the head-locked solutions should not be immediately dismissed because, according to the literature, such solutions can serve tasks where a salient warning or instruction is beneficial.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"7 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing music rhythmic perception and performance with a VR game
Pub Date: 2024-05-25 | DOI: 10.1007/s10055-024-01014-y
Matevž Pesek, Nejc Hirci, Klara Žnideršič, Matija Marolt
This study analyzes the effect of using a virtual reality (VR) game as a complementary tool to improve users' rhythmic performance and perception in a remote, self-learning environment. In recent years, remote learning has gained importance in a variety of everyday situations; however, the effects of using VR in such settings for individual self-learning have yet to be evaluated. In music education, learning processes usually depend heavily on face-to-face communication with a teacher and are based on a formal or informal curriculum. The aim of this study is to investigate the potential of gamified VR learning and its influence on users' rhythmic sensory and perceptual abilities. We developed a drum-playing game based on a tower-defense scenario, designed to improve four aspects of rhythmic perceptual skill in elementary school children with varying levels of music learning experience. In this study, 14 elementary school children received Meta Quest 2 headsets for individual use over a 14-day training period. An analysis of rhythmic performance before and after the training sessions showed a significant improvement in the children's rhythmic skills. In addition, the experience of playing the VR game and using the HMD setup was assessed, highlighting some of the challenges that currently available affordable headsets pose for gamified learning scenarios.
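Rhythmic performance of this kind is commonly quantified as the timing deviation of played hits from the target beat grid. The following is an illustrative sketch of such a pre/post comparison, not the authors' metric; the hit data are synthetic.

```python
# Illustrative pre/post rhythmic-performance score: mean absolute
# timing deviation of drum hits from the nearest target beat onset.
import numpy as np

def timing_error_ms(hit_times_s, beat_times_s):
    """Match each hit to its nearest beat and average |deviation| in ms."""
    hits = np.asarray(hit_times_s)[:, None]
    beats = np.asarray(beat_times_s)[None, :]
    return np.abs(hits - beats).min(axis=1).mean() * 1000

beats = np.arange(0, 8, 0.5)  # a 120 BPM quarter-note grid (0.5 s apart)
pre = timing_error_ms(beats + np.random.default_rng(2).normal(0, 0.05, beats.size), beats)
post = timing_error_ms(beats + np.random.default_rng(3).normal(0, 0.02, beats.size), beats)
print(f"pre: {pre:.0f} ms, post: {post:.0f} ms")  # smaller error after training
```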
{"title":"Enhancing music rhythmic perception and performance with a VR game","authors":"Matevž Pesek, Nejc Hirci, Klara Žnideršič, Matija Marolt","doi":"10.1007/s10055-024-01014-y","DOIUrl":"https://doi.org/10.1007/s10055-024-01014-y","url":null,"abstract":"<p>This study analyzes the effect of using a virtual reality (VR) game as a complementary tool to improve users’ rhythmic performance and perception in a remote and self-learning environment. In recent years, remote learning has gained importance due to various everyday situations; however, the effects of using VR in such situations for individual and self-learning have yet to be evaluated. In music education, learning processes are usually heavily dependent on face-to-face communication with a teacher and are based on a formal or informal curriculum. The aim of this study is to investigate the potential of gamified VR learning and its influence on users’ rhythmic sensory and perceptual abilities. We developed a drum-playing game based on a tower defense scenario designed to improve four aspects of rhythmic perceptual skills in elementary school children with various levels of music learning experience. In this study, 14 elementary school children received Meta Quest 2 headsets for individual use in a 14-day individual training session. The results showed a significant increase in their rhythmical skills through an analysis of their rhythmic performance before and after the training sessions. In addition, the experience of playing the VR game and using the HMD setup was also assessed, highlighting some of the challenges of currently available affordable headsets for gamified learning scenarios.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"58 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141145743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social cognition training using virtual reality for people with schizophrenia: a scoping review
Pub Date: 2024-05-25 | DOI: 10.1007/s10055-024-01010-2
D. A. Pérez-Ferrara, G. Y. Flores-Medina, E. Landa-Ramírez, D. J. González-Sánchez, J. A. Luna-Padilla, A. L. Sosa-Millán, A. Mondragón-Maya
To date, many interventions for social cognition have been developed. Nevertheless, the use of social cognition training with virtual reality (SCT-VR) in schizophrenia is a recent field of study. A scoping review is therefore a suitable method for examining the extent of the existing literature, the characteristics of the studies, and the SCT-VR interventions themselves. It also allows findings from a heterogeneous body of knowledge to be summarized and gaps in the literature to be identified, which supports the planning and conduct of future research. The aim of this review was to explore and describe the characteristics of SCT-VR in schizophrenia. The searched databases were MEDLINE, PsycInfo, Web of Science, and CINAHL. This scoping review considered experimental, quasi-experimental, analytical observational, and descriptive observational study designs. The full text of selected citations was assessed by two independent reviewers, and data were extracted from the included papers by two independent reviewers. We identified 1,407 records, of which twelve studies were included for analysis. Study designs varied; most were proof-of-concept or pilot studies. Most SCT-VR interventions were immersive and targeted. The number of sessions ranged from 9 to 16, and the duration of each session ranged from 45 to 120 min. Some studies reported a significant improvement in emotion recognition and/or theory of mind. However, SCT-VR is a recent research field with evident methodological heterogeneity, which has so far prevented robust conclusions. Preliminary evidence suggests that SCT-VR could be a feasible and promising approach for improving social cognition deficits in schizophrenia.
{"title":"Social cognition training using virtual reality for people with schizophrenia: a scoping review","authors":"D. A. Pérez-Ferrara, G. Y. Flores-Medina, E. Landa-Ramírez, D. J. González-Sánchez, J. A. Luna-Padilla, A. L. Sosa-Millán, A. Mondragón-Maya","doi":"10.1007/s10055-024-01010-2","DOIUrl":"https://doi.org/10.1007/s10055-024-01010-2","url":null,"abstract":"<p>To date, many interventions for social cognition have been developed. Nevertheless, the use of social cognition training with virtual reality (SCT-VR) in schizophrenia is a recent field of study. Therefore, a scoping review is a suitable method to examine the extent of existing literature, the characteristics of the studies, and the SCT-VR. Additionally, it allows us to summarize findings from a heterogeneous body of knowledge and identify gaps in the literature favoring the planning and conduct of future research. The aim of this review was to explore and describe the characteristics of SCT-VR in schizophrenia. The searched databases were MEDLINE, PsycInfo, Web of Science, and CINAHL. This scoping review considered experimental, quasi-experimental, analytical observational and descriptive observational study designs. The full text of selected citations was assessed by two independent reviewers. Data were extracted from papers included in the scoping review by two independent reviewers. We identified 1,407 records. A total of twelve studies were included for analyses. Study designs were variable, most research was proof-of-concept or pilot studies. Most SCT-VR were immersive and targeted interventions. Number of sessions ranged from 9 to 16, and the duration of each session ranged from 45 to 120 min. Some studies reported a significant improvement in emotion recognition and/or theory of mind. However, SCT-VR is a recent research field in which the heterogeneity in methodological approaches is evident and has prevented the reaching of robust conclusions. Preliminary evidence has shown that SCT-VR could represent a feasible and promising approach for improving SC deficits in schizophrenia.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"25 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141145869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Replicating outdoor environments using VR and ambisonics: a methodology for accurate audio-visual recording, processing and reproduction
Pub Date: 2024-05-17 | DOI: 10.1007/s10055-024-01003-1
Fotis Georgiou, Claudia Kawai, Beat Schäffer, Reto Pieren
This paper introduces a methodology tailored to capture, post-process, and replicate audio-visual data of outdoor environments (urban or natural) for VR experiments carried out within a controlled laboratory environment. The methodology consists of 360° video and higher-order ambisonic (HOA) field recordings, followed by calibrated spatial sound reproduction over a spherical loudspeaker array, with video played back via a head-mounted display using a game engine and a graphical user interface for a perceptual experiment questionnaire. Attention was given to the equalisation and calibration of the ambisonic microphone and to the design of different ambisonic decoders. A listening experiment was conducted to evaluate four different decoders (one 2D first-order ambisonic decoder and three 3D third-order decoders) by asking participants to rate the relative (perceived) realism of recorded outdoor soundscapes reproduced with each. The results showed that the third-order decoders were ranked as more realistic.
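At its core, an ambisonic decoder maps the spherical-harmonic signal components to loudspeaker feeds. The sketch below shows only that basic signal flow: a first-order "projection" decoder for a horizontal loudspeaker ring, assuming an ACN/SN3D-style channel convention. The paper's calibrated third-order decoders are considerably more involved (equalisation and normalisation are omitted here).

```python
# Minimal first-order ambisonic projection decoder for a horizontal
# loudspeaker ring. Convention (ACN/SN3D-style horizontal components)
# is an assumption; real decoders need careful normalisation.
import numpy as np

def foa_decode(W, X, Y, speaker_azimuths_rad):
    """Decode FOA signals to one feed per loudspeaker via projection."""
    n = len(speaker_azimuths_rad)
    out = []
    for az in speaker_azimuths_rad:
        out.append((W + X * np.cos(az) + Y * np.sin(az)) / n)
    return np.stack(out)

# Example: a 440 Hz plane wave from 45 degrees, encoded then decoded
# to an 8-speaker ring; the speaker nearest 45 degrees is loudest.
src_az = np.deg2rad(45)
t = np.linspace(0, 1, 48000)
sig = np.sin(2 * np.pi * 440 * t)
W, X, Y = sig, sig * np.cos(src_az), sig * np.sin(src_az)
ring = np.deg2rad(np.arange(8) * 45.0)
feeds = foa_decode(W, X, Y, ring)
print(np.round(np.max(np.abs(feeds), axis=1), 3))
```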
{"title":"Replicating outdoor environments using VR and ambisonics: a methodology for accurate audio-visual recording, processing and reproduction","authors":"Fotis Georgiou, Claudia Kawai, Beat Schäffer, Reto Pieren","doi":"10.1007/s10055-024-01003-1","DOIUrl":"https://doi.org/10.1007/s10055-024-01003-1","url":null,"abstract":"<p>This paper introduces a methodology tailored to capture, post-process, and replicate audio-visual data of outdoor environments (urban or natural) for VR experiments carried out within a controlled laboratory environment. The methodology consists of 360<span>(^circ)</span> video and higher order ambisonic (HOA) field recordings and subsequent calibrated spatial sound reproduction with a spherical loudspeaker array and video played back via a head-mounted display using a game engine and a graphical user interface for a perceptual experimental questionnaire. Attention was given to the equalisation and calibration of the ambisonic microphone and to the design of different ambisonic decoders. A listening experiment was conducted to evaluate four different decoders (one 2D first-order ambisonic decoder and three 3D third-order decoders) by asking participants to rate the relative (perceived) realism of recorded outdoor soundscapes reproduced with these decoders. The results showed that the third-order decoders were ranked as more realistic.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"4 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141059830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Not just cybersickness: short-term effects of popular VR game mechanics on physical discomfort and reaction time
Pub Date: 2024-05-15 | DOI: 10.1007/s10055-024-01007-x
Sara Vlahovic, Lea Skorin-Kapov, Mirko Suznjevic, Nina Pavlin-Bernardic
Uncomfortable sensations arising during virtual reality (VR) use have always been among the industry's biggest challenges. While certain VR-induced effects, such as cybersickness, have garnered considerable interest from academia and industry over the years, others have been overlooked and under-researched. Recently, the research community has been calling for more holistic approaches to studying VR discomfort. Focusing on active VR gaming, our article presents the results of two user studies with a total of 40 participants. Incorporating state-of-the-art VR-specific measures into our methodology (the Simulation Task Load Index (SIM-TLX), the Cybersickness Questionnaire (CSQ), and the Virtual Reality Sickness Questionnaire (VRSQ)), we examined workload, musculoskeletal discomfort, device-related discomfort, cybersickness, and changes in reaction time following VR gameplay. Using a set of six different active VR games (three per study), we quantified and compared the prevalence and intensity of VR-induced symptoms across different genres and game mechanics. The diverse symptoms reported in our study, which varied across both individuals and games, highlight the importance of including measures of VR-induced effects beyond cybersickness in VR gaming user studies, and call into question the suitability of the Simulator Sickness Questionnaire (SSQ), arguably the field's most prevalent measure of VR discomfort, for active VR gaming scenarios.
{"title":"Not just cybersickness: short-term effects of popular VR game mechanics on physical discomfort and reaction time","authors":"Sara Vlahovic, Lea Skorin-Kapov, Mirko Suznjevic, Nina Pavlin-Bernardic","doi":"10.1007/s10055-024-01007-x","DOIUrl":"https://doi.org/10.1007/s10055-024-01007-x","url":null,"abstract":"<p>Uncomfortable sensations that arise during virtual reality (VR) use have always been among the industry’s biggest challenges. While certain VR-induced effects, such as cybersickness, have garnered a lot of interest from academia and industry over the years, others have been overlooked and underresearched. Recently, the research community has been calling for more holistic approaches to studying the issue of VR discomfort. Focusing on active VR gaming, our article presents the results of two user studies with a total of 40 participants. Incorporating state-of-the-art VR-specific measures (the Simulation Task Load Index—SIM-TLX, Cybersickness Questionnaire—CSQ, Virtual Reality Sickness Questionnaire—VRSQ) into our methodology, we examined workload, musculoskeletal discomfort, device-related discomfort, cybersickness, and changes in reaction time following VR gameplay. Using a set of six different active VR games (three per study), we attempted to quantify and compare the prevalence and intensity of VR-induced symptoms across different genres and game mechanics. Varying between individuals, as well as games, the diverse symptoms reported in our study highlight the importance of including measures of VR-induced effects other than cybersickness into VR gaming user studies, while questioning the suitability of the Simulator Sickness Questionnaire (SSQ)—arguably the most prevalent measure of VR discomfort in the field—for use with active VR gaming scenarios.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"304 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140939691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal emotion classification using machine learning in immersive and non-immersive virtual reality
Pub Date: 2024-05-06 | DOI: 10.1007/s10055-024-00989-y
Rodrigo Lima, Alice Chirico, Rui Varandas, Hugo Gamboa, Andrea Gaggioli, Sergi Bermúdez i Badia
Affective computing has been widely used to detect and recognize emotional states. The main goal of this study was to automatically detect emotional states using machine learning algorithms. The experimental procedure involved eliciting emotional states using film clips in an immersive and a non-immersive virtual reality setup. The participants' physiological signals were recorded and analyzed to train machine learning models to recognize users' emotional states. In addition, two subjective emotional rating scales were provided for rating each film clip. Results showed no significant differences between presenting the stimuli at the two degrees of immersion. Regarding emotion classification, user-dependent models performed better than user-independent models for both physiological signals and subjective ratings: we obtained average accuracies of 69.29 ± 11.41% for the subjective ratings and 71.00 ± 7.95% for the physiological signals with user-dependent models, versus 54.0 ± 17.2% and 24.9 ± 4.0%, respectively, with user-independent models. We interpret these results as reflecting high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future work, we intend to develop new classification algorithms and transfer them to a real-time implementation, making it possible to adapt a virtual reality environment in real time according to the user's emotional state.
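The user-dependent versus user-independent distinction comes down to whether data from the same participant may appear in both training and test sets. Below is a minimal sketch of the two evaluation schemes with scikit-learn, using synthetic stand-in data rather than the authors' dataset, features, or classifier.

```python
# Sketch of user-dependent vs. user-independent evaluation with
# synthetic stand-in data (not the authors' dataset or model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_users, n_trials = 10, 40
X = rng.normal(size=(n_users * n_trials, 8))         # stand-in physiological features
y = np.tile(np.arange(4), n_users * n_trials // 4)   # stand-in emotion labels
groups = np.repeat(np.arange(n_users), n_trials)     # participant IDs

clf = RandomForestClassifier(random_state=0)

# User-independent: no participant appears in both train and test folds.
indep = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))

# User-dependent: a separate model per participant, cross-validated within.
dep = [cross_val_score(clf, X[groups == u], y[groups == u], cv=4).mean()
       for u in range(n_users)]

print(f"user-independent: {indep.mean():.2f}, user-dependent: {np.mean(dep):.2f}")
```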
{"title":"Multimodal emotion classification using machine learning in immersive and non-immersive virtual reality","authors":"Rodrigo Lima, Alice Chirico, Rui Varandas, Hugo Gamboa, Andrea Gaggioli, Sergi Bermúdez i Badia","doi":"10.1007/s10055-024-00989-y","DOIUrl":"https://doi.org/10.1007/s10055-024-00989-y","url":null,"abstract":"<p>Affective computing has been widely used to detect and recognize emotional states. The main goal of this study was to detect emotional states using machine learning algorithms automatically. The experimental procedure involved eliciting emotional states using film clips in an immersive and non-immersive virtual reality setup. The participants’ physiological signals were recorded and analyzed to train machine learning models to recognize users’ emotional states. Furthermore, two subjective ratings emotional scales were provided to rate each emotional film clip. Results showed no significant differences between presenting the stimuli in the two degrees of immersion. Regarding emotion classification, it emerged that for both physiological signals and subjective ratings, user-dependent models have a better performance when compared to user-independent models. We obtained an average accuracy of 69.29 ± 11.41% and 71.00 ± 7.95% for the subjective ratings and physiological signals, respectively. On the other hand, using user-independent models, the accuracy we obtained was 54.0 ± 17.2% and 24.9 ± 4.0%, respectively. We interpreted these data as the result of high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future works, we intend to develop new classification algorithms and transfer them to real-time implementation. This will make it possible to adapt to a virtual reality environment in real-time, according to the user’s emotional state.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"107 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking and co-location of global point clouds for large-area indoor environments
Pub Date: 2024-05-04 | DOI: 10.1007/s10055-024-01004-0
Nick Michiels, Lode Jorissen, Jeroen Put, Jori Liesenborgs, Isjtar Vandebroeck, Eric Joris, Frank Van Reeth
Extended reality (XR) experiences are on the verge of becoming widely adopted in diverse application domains. An essential part of the technology is accurate tracking and localization of the headset to create an immersive experience. A subset of these applications requires perfect co-location between the real and the virtual world, where virtual objects are aligned with their real-world counterparts. Current headsets support co-location for small areas but suffer from drift when scaling up to larger ones such as buildings or factories. This paper proposes tools and solutions for this challenge by splitting simultaneous localization and mapping (SLAM) into separate mapping and localization stages. In the pre-processing stage, a feature map is built for the entire tracking area. A global optimizer is applied to correct the deformations caused by drift, guided by a sparse set of ground-truth markers in the point cloud of a laser scan. Optionally, further refinement is applied by matching features between the ground-truth keyframe images and their rendered-out SLAM estimates of the point cloud. In the second, real-time stage, the rectified feature map is used to perform localization and sensor fusion between the global tracking and the headset. The results show that the approach achieves robust co-location between the virtual and the real 3D environment for large and complex tracking environments.
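The core of marker-guided map correction is estimating a transform that brings the SLAM estimates of the marker positions onto their laser-scanned ground-truth positions. The sketch below is a much-simplified version of that idea: a single similarity transform fitted with the Umeyama method, whereas the paper's global optimizer corrects non-rigid, per-region drift deformation.

```python
# Simplified marker-guided alignment: fit one similarity transform
# (scale s, rotation R, translation t) mapping drifted SLAM marker
# estimates onto laser-scanned ground truth via the Umeyama method.
import numpy as np

def umeyama(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform with dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0            # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = (D * np.diag(S)).sum() / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Demo: recover a known drift transform from 5 marker correspondences.
rng = np.random.default_rng(1)
markers_scan = rng.uniform(0, 10, size=(5, 3))          # laser-scan ground truth
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
markers_slam = (markers_scan - 2.0) @ R_true.T / 1.02   # drifted SLAM estimates
s, R, t = umeyama(markers_slam, markers_scan)
print(np.allclose(s * markers_slam @ R.T + t, markers_scan, atol=1e-6))
```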
{"title":"Tracking and co-location of global point clouds for large-area indoor environments","authors":"Nick Michiels, Lode Jorissen, Jeroen Put, Jori Liesenborgs, Isjtar Vandebroeck, Eric Joris, Frank Van Reeth","doi":"10.1007/s10055-024-01004-0","DOIUrl":"https://doi.org/10.1007/s10055-024-01004-0","url":null,"abstract":"<p>Extended reality (XR) experiences are on the verge of becoming widely adopted in diverse application domains. An essential part of the technology is accurate tracking and localization of the headset to create an immersive experience. A subset of the applications require perfect co-location between the real and the virtual world, where virtual objects are aligned with real-world counterparts. Current headsets support co-location for small areas, but suffer from drift when scaling up to larger ones such as buildings or factories. This paper proposes tools and solutions for this challenge by splitting up the simultaneous localization and mapping (SLAM) into separate mapping and localization stages. In the pre-processing stage, a feature map is built for the entire tracking area. A global optimizer is applied to correct the deformations caused by drift, guided by a sparse set of ground truth markers in the point cloud of a laser scan. Optionally, further refinement is applied by matching features between the ground truth keyframe images and their rendered-out SLAM estimates of the point cloud. In the second, real-time stage, the rectified feature map is used to perform localization and sensor fusion between the global tracking and the headset. The results show that the approach achieves robust co-location between the virtual and the real 3D environment for large and complex tracking environments.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"18 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PublicVR: a virtual reality exposure therapy intervention for adults with speech anxiety
Pub Date: 2024-04-30 | DOI: 10.1007/s10055-024-00998-x
Fotios Spyridonis, Damon Daylamani-Zad, James Nightingale
Speech anxiety, or glossophobia, currently affects approximately 75% of the population, with potentially severe negative effects on those with the condition. Several treatments are currently available, and research shows that the use of Virtual Reality (VR) as a non-pharmacologic treatment can have positive effects on individuals suffering from such social phobias. However, there is a significant lack of treatments for speech anxiety specifically, even though such a large proportion of the population is affected by it. In this paper, we aim to contribute to efforts to alleviate speech anxiety through a VR intervention. Our VR solution was designed following the Exposure Therapy approach for treating social anxiety disorders. The evaluation of this work was twofold: (a) to assess the ability of our solution to positively change participants' perception of factors related to non-verbal communication that contribute to public speaking anxiety, and (b) to determine whether it is able to induce a sense of presence. We carried out an empirical evaluation study that measured participants' self-reported anxiety towards public speaking using the Personal Report of Public Speaking Anxiety and their perceived sense of presence using the iGroup Presence Questionnaire. Our results demonstrate the potential of VR Exposure Therapy solutions to help positively change the perception of non-verbal communication factors that increase public speaking anxiety in participants with self-reported speech anxiety symptoms. Our findings are of wider importance as they contribute to ongoing efforts to address social anxiety-related phobias.
{"title":"PublicVR: a virtual reality exposure therapy intervention for adults with speech anxiety","authors":"Fotios Spyridonis, Damon Daylamani-Zad, James Nightingale","doi":"10.1007/s10055-024-00998-x","DOIUrl":"https://doi.org/10.1007/s10055-024-00998-x","url":null,"abstract":"<p>Speech anxiety, or Glossophobia, currently affects approximately 75% of the population with potentially severe negative effects on those with this condition. There are several treatments currently available with research showing that the use of Virtual Reality (VR) as a non-pharmacologic treatment can have positive effects on individuals suffering from such social phobias. However, there is a significant lack of treatments currently available for speech anxiety, even though such a large number of the population are affected by it. In this paper, we aim to contribute to efforts to improve the effects of speech anxiety through a VR intervention. Our VR solution was designed following the Exposure Therapy approach for treating social anxiety disorders. The evaluation of this work was twofold: A. to assess the ability of our solution to positively change participants’ perception of factors related to non-verbal communication contributing to anxiety toward public speaking, and B. to determine whether it is able to induce a sense of presence. We carried out an empirical evaluation study that measured participants’ self-reported anxiety level towards public speaking using the Personal Report of Public Speaking Anxiety and their perceived sense of presence using the iGroup Presence Questionnaire. Our results demonstrate the potential of VR Exposure Therapy solutions to assist towards positively changing perception of factors related to non-verbal communication skills that contribute to increasing public speaking anxiety for participants suffering from self-reported speech anxiety symptoms. Our findings are of wider importance as they contribute to ongoing efforts to improve social anxiety-related phobias.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"49 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140841121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of visual, auditory, and olfactory stimulus-based attractors for intermittent reorientation in virtual reality locomotion
Pub Date: 2024-04-26 | DOI: 10.1007/s10055-024-00997-y
Jieun Lee, Seokhyun Hwang, Kyunghwan Kim, SeungJun Kim
In virtual reality, redirected walking (RDW) enables users to stay within the tracking area while feeling that they are traveling in a virtual space larger than the physical space. RDW combines a visual attractor that draws the user's gaze with scene manipulation for intermittent reorientation. However, repeated use of visual attractors can hinder immersion in the virtual world and weaken reorientation performance. In this study, we propose using sounds and smells as alternative stimuli to draw the user's attention implicitly and to sustain the attractor's performance for intermittent reorientation. To achieve this, we integrated visual, auditory, and olfactory attractors into an all-in-one stimulation system. Experiments revealed that the auditory attractor caused the fastest reorientation, the olfactory attractor induced the widest angular difference, and the combined auditory and olfactory attractor induced the largest angular speed while keeping users from noticing the manipulation. The findings demonstrate the potential of nonvisual attractors to reorient users in situations requiring intermittent reorientation.
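The scene-manipulation half of RDW typically applies a rotation gain while the user's head turns toward the attractor, so the virtual scene rotates slightly more (or less) than the head does. Below is a minimal sketch of that mechanism; the gain value and the fixed turn are illustrative, not the paper's parameters.

```python
# Minimal rotation-gain sketch: scale each frame's real head-yaw change
# so the user's physical heading is steered without them noticing.
import numpy as np

def redirected_yaw(prev_virtual_yaw, real_yaw_delta, gain=1.3):
    """Amplify (gain > 1) or dampen (gain < 1) perceived rotation by
    scaling the per-frame real head-yaw change."""
    return prev_virtual_yaw + gain * real_yaw_delta

virtual_yaw, real_yaw = 0.0, 0.0
for _ in range(90):                 # user turns 90 deg toward an attractor
    real_yaw += np.deg2rad(1.0)
    virtual_yaw = redirected_yaw(virtual_yaw, np.deg2rad(1.0))
print(np.rad2deg(virtual_yaw - real_yaw))  # ~27 deg of injected redirection
```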
{"title":"Evaluation of visual, auditory, and olfactory stimulus-based attractors for intermittent reorientation in virtual reality locomotion","authors":"Jieun Lee, Seokhyun Hwang, Kyunghwan Kim, SeungJun Kim","doi":"10.1007/s10055-024-00997-y","DOIUrl":"https://doi.org/10.1007/s10055-024-00997-y","url":null,"abstract":"<p>In virtual reality, redirected walking (RDW) enables users to stay within the tracking area while feeling that they are traveling in a virtual space that is larger than the physical space. RDW uses a visual attractor to the user’s sight and scene manipulation for intermittent reorientation. However, repeated usage can hinder the virtual world immersion and weaken the reorientation performance. In this study, we propose using sounds and smells as alternative stimuli to draw the user’s attention implicitly and sustain the attractor’s performance for intermittent reorientation. To achieve this, we integrated visual, auditory, and olfactory attractors into an all-in-one stimulation system. Experiments revealed that the auditory attractor caused the fastest reorientation, the olfactory attractor induced the widest angular difference, and the attractor with the combined auditory and olfactory stimuli induced the largest angular speed, keeping users from noticing the manipulation. The findings demonstrate the potential of nonvisual attractors to reorient users in situations requiring intermittent reorientation.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"36 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140806506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision
Pub Date: 2024-04-16 | DOI: 10.1007/s10055-024-00987-0
Alexander Neugebauer, Nora Castner, Björn Severitt, Katarina Stingl, Iliya Ivanov, Siegfried Wahl
In this work, we explore the potential and limitations of simulating gaze-contingent tunnel vision conditions using Virtual Reality (VR) with built-in eye-tracking technology. This approach promises an easy and accessible way of expanding study populations and test groups for visual training, visual aids, or accessibility evaluations. However, it is crucial to assess the validity and reliability of simulating these types of visual impairment and to evaluate the extent to which participants with simulated tunnel vision can represent real patients. Two age-matched participant groups were recruited: the first (n = 8, aged 20–60, average 49.1 ± 13.2) consisted of patients diagnosed with retinitis pigmentosa (RP); the second (n = 8, aged 27–59, average 46.5 ± 10.8) consisted of visually healthy participants with simulated tunnel vision. Both groups carried out different visual tasks in a virtual environment for 30 min per day over the course of four weeks, and task performance as well as gaze characteristics were evaluated in both groups throughout the study. Using the 'two one-sided tests for equivalence' method, the two groups were found to perform similarly in all three visual tasks. Significant differences between the groups were found in several aspects of their gaze behavior, though most of these aspects appear to converge over time. Our study evaluates the potential and limitations of using Virtual Reality technology to simulate the effects of tunnel vision within controlled virtual environments. We find that the simulation accurately represents the performance of RP patients at the level of group averages, but fails to fully replicate effects on gaze behavior.
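Gaze-contingent tunnel vision is typically implemented by masking each rendered frame around the current eye-tracked gaze point. The sketch below is not the study's implementation; it shows the basic masking step, and the pixels-per-degree value is an assumed headset property.

```python
# Minimal gaze-contingent tunnel-vision mask: pixels farther than a
# visual-angle radius from the current gaze point are blacked out.
import numpy as np

def tunnel_mask(frame: np.ndarray, gaze_xy, radius_deg=10.0, px_per_deg=20.0):
    """Zero out everything outside a circular field around the gaze."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / px_per_deg
    return np.where((dist_deg <= radius_deg)[..., None], frame, 0)

frame = np.full((600, 800, 3), 255, dtype=np.uint8)   # stand-in video frame
masked = tunnel_mask(frame, gaze_xy=(400, 300))       # gaze at image center
print(masked[300, 400], masked[0, 0])                 # visible vs. masked pixel
```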
{"title":"Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision","authors":"Alexander Neugebauer, Nora Castner, Björn Severitt, Katarina Stingl, Iliya Ivanov, Siegfried Wahl","doi":"10.1007/s10055-024-00987-0","DOIUrl":"https://doi.org/10.1007/s10055-024-00987-0","url":null,"abstract":"<p>In this work, we explore the potential and limitations of simulating gaze-contingent tunnel vision conditions using Virtual Reality (VR) with built-in eye tracking technology. This approach promises an easy and accessible way of expanding study populations and test groups for visual training, visual aids, or accessibility evaluations. However, it is crucial to assess the validity and reliability of simulating these types of visual impairments and evaluate the extend to which participants with simulated tunnel vision can represent real patients. Two age-matched participant groups were acquired: The first group (n = 8, aged 20–60, average 49.1 ± 13.2) consisted of patients diagnosed with Retinitis pigmentosa (RP). The second group (n = 8, aged 27–59, average 46.5 ± 10.8) consisted of visually healthy participants with simulated tunnel vision. Both groups carried out different visual tasks in a virtual environment for 30 min per day over the course of four weeks. Task performances as well as gaze characteristics were evaluated in both groups over the course of the study. Using the ’two one-sided tests for equivalence’ method, the two groups were found to perform similar in all three visual tasks. Significant differences between groups were found in different aspects of their gaze behavior, though most of these aspects seem to converge over time. Our study evaluates the potential and limitations of using Virtual Reality technology to simulate the effects of tunnel vision within controlled virtual environments. We find that the simulation accurately represents performance of RP patients in the context of group averages, but fails to fully replicate effects on gaze behavior.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"12 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140569448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}