
Latest publications in Virtual Reality

Head-locked, world-locked, or conformal diminished-reality? An examination of different AR solutions for pedestrian safety in occluded scenarios
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-27 | DOI: 10.1007/s10055-024-01017-9
Joris Peereboom, Wilbert Tabone, Dimitra Dodou, Joost de Winter

Many collisions between pedestrians and cars are caused by poor visibility, such as occlusion by a parked vehicle. Augmented reality (AR) could help to prevent this problem, but it is unknown to what extent the augmented information needs to be embedded into the world. In this virtual reality experiment with a head-mounted display (HMD), 28 participants were exposed to AR designs, in a scenario where a vehicle approached from behind a parked vehicle. The experimental conditions included a head-locked live video feed of the occluded region, meaning it was fixed in a specific location within the view of the HMD (VideoHead), a world-locked video feed displayed across the street (VideoStreet), and two conformal diminished reality designs: a see-through display on the occluding vehicle (VideoSeeThrough) and a solution where the occluding vehicle has been made semi-transparent (TransparentVehicle). A Baseline condition without augmented information served as a reference. Additionally, the VideoHead and VideoStreet conditions were each tested with and without the addition of a guiding arrow indicating the location of the approaching vehicle. Participants performed 42 trials, 6 per condition, during which they had to hold a key when they felt safe to cross. The keypress percentages and responses from additional questionnaires showed that the diminished-reality TransparentVehicle and VideoSeeThrough designs came out most favourably, while the VideoHead solution caused some discomfort and dissatisfaction. An analysis of head yaw angle showed that VideoHead and VideoStreet caused divided attention between the screen and the approaching vehicle. The use of guiding arrows did not contribute demonstrable added value. AR designs with a high level of local embeddedness are beneficial for addressing occlusion problems when crossing. However, the head-locked solutions should not be immediately dismissed because, according to the literature, such solutions can serve tasks where a salient warning or instruction is beneficial.
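The crossing-decision measure described above reduces to the share of trial time a participant held the "safe to cross" key, plus a head-yaw analysis of where attention went. The sketch below is an illustrative reconstruction of such metrics, not the authors' analysis code; the sampling rate, yaw angles, and glance tolerance are assumed values.

```python
import numpy as np

def keypress_percentage(key_held):
    """Share of trial time (in %) the participant signalled 'safe to cross'.

    key_held: boolean array of key states sampled at a fixed rate (assumed 90 Hz).
    """
    return 100.0 * np.asarray(key_held).mean()

def attention_split(yaw_deg, screen_yaw_deg, vehicle_yaw_deg, tolerance_deg=15.0):
    """Rough share of samples spent looking toward the AR screen vs. the vehicle.

    All angles are in degrees; the tolerance defining a 'glance' is an assumed value.
    """
    yaw = np.asarray(yaw_deg)
    toward_screen = np.abs(yaw - screen_yaw_deg) < tolerance_deg
    toward_vehicle = np.abs(yaw - vehicle_yaw_deg) < tolerance_deg
    return toward_screen.mean(), toward_vehicle.mean()

# Synthetic 10 s trial sampled at 90 Hz
rng = np.random.default_rng(0)
key_held = rng.random(900) > 0.4                   # key held roughly 60% of the time
yaw = rng.normal(loc=-20.0, scale=30.0, size=900)  # head-yaw trace in degrees
print(f"keypress percentage: {keypress_percentage(key_held):.1f}%")
print("screen/vehicle glance share:",
      attention_split(yaw, screen_yaw_deg=-60.0, vehicle_yaw_deg=45.0))
```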

Citations: 0
Enhancing music rhythmic perception and performance with a VR game
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-25 | DOI: 10.1007/s10055-024-01014-y
Matevž Pesek, Nejc Hirci, Klara Žnideršič, Matija Marolt

This study analyzes the effect of using a virtual reality (VR) game as a complementary tool to improve users’ rhythmic performance and perception in a remote and self-learning environment. In recent years, remote learning has gained importance due to various everyday situations; however, the effects of using VR in such situations for individual and self-learning have yet to be evaluated. In music education, learning processes are usually heavily dependent on face-to-face communication with a teacher and are based on a formal or informal curriculum. The aim of this study is to investigate the potential of gamified VR learning and its influence on users’ rhythmic sensory and perceptual abilities. We developed a drum-playing game based on a tower defense scenario designed to improve four aspects of rhythmic perceptual skills in elementary school children with various levels of music learning experience. In this study, 14 elementary school children received Meta Quest 2 headsets for individual use over a 14-day training period. An analysis of their rhythmic performance before and after the training sessions showed a significant increase in their rhythmic skills. In addition, the experience of playing the VR game and using the HMD setup was also assessed, highlighting some of the challenges of currently available affordable headsets for gamified learning scenarios.
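One common way to quantify rhythmic performance before and after training is to measure how far drum hits deviate from the underlying beat grid. The sketch below shows such a generic onset-to-grid timing measure; it is an assumption-laden stand-in, not the scoring used in the cited study.

```python
import numpy as np

def rhythmic_timing_error(hit_times_s, bpm, first_beat_s=0.0):
    """Mean absolute deviation (ms) of drum hits from the nearest metronome beat.

    A generic onset-to-grid measure; tempo and hit times below are invented.
    """
    beat_period = 60.0 / bpm
    hits = np.asarray(hit_times_s, dtype=float) - first_beat_s
    # Distance from each hit to the closest beat on the grid
    deviation = np.abs(hits - np.round(hits / beat_period) * beat_period)
    return 1000.0 * deviation.mean()

pre  = rhythmic_timing_error([0.02, 0.55, 0.98, 1.62, 2.07], bpm=120)
post = rhythmic_timing_error([0.01, 0.51, 1.01, 1.49, 2.02], bpm=120)
print(f"pre-training error: {pre:.0f} ms, post-training error: {post:.0f} ms")
```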

Citations: 0
Social cognition training using virtual reality for people with schizophrenia: a scoping review
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-25 | DOI: 10.1007/s10055-024-01010-2
D. A. Pérez-Ferrara, G. Y. Flores-Medina, E. Landa-Ramírez, D. J. González-Sánchez, J. A. Luna-Padilla, A. L. Sosa-Millán, A. Mondragón-Maya

To date, many interventions for social cognition have been developed. Nevertheless, the use of social cognition training with virtual reality (SCT-VR) in schizophrenia is a recent field of study. Therefore, a scoping review is a suitable method to examine the extent of existing literature, the characteristics of the studies, and the SCT-VR. Additionally, it allows us to summarize findings from a heterogeneous body of knowledge and identify gaps in the literature, which supports the planning and conduct of future research. The aim of this review was to explore and describe the characteristics of SCT-VR in schizophrenia. The searched databases were MEDLINE, PsycInfo, Web of Science, and CINAHL. This scoping review considered experimental, quasi-experimental, analytical observational and descriptive observational study designs. The full text of selected citations was assessed by two independent reviewers. Data were extracted from papers included in the scoping review by two independent reviewers. We identified 1,407 records. A total of twelve studies were included for analyses. Study designs were variable; most were proof-of-concept or pilot studies. Most SCT-VR interventions were immersive and targeted. The number of sessions ranged from 9 to 16, and the duration of each session ranged from 45 to 120 min. Some studies reported a significant improvement in emotion recognition and/or theory of mind. However, SCT-VR is a recent research field in which the heterogeneity in methodological approaches is evident and has prevented robust conclusions from being reached. Preliminary evidence has shown that SCT-VR could represent a feasible and promising approach for improving SC deficits in schizophrenia.

Citations: 0
Replicating outdoor environments using VR and ambisonics: a methodology for accurate audio-visual recording, processing and reproduction
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-17 | DOI: 10.1007/s10055-024-01003-1
Fotis Georgiou, Claudia Kawai, Beat Schäffer, Reto Pieren

This paper introduces a methodology tailored to capture, post-process, and replicate audio-visual data of outdoor environments (urban or natural) for VR experiments carried out within a controlled laboratory environment. The methodology consists of 360° video and higher-order ambisonic (HOA) field recordings and subsequent calibrated spatial sound reproduction with a spherical loudspeaker array and video played back via a head-mounted display using a game engine and a graphical user interface for a perceptual experimental questionnaire. Attention was given to the equalisation and calibration of the ambisonic microphone and to the design of different ambisonic decoders. A listening experiment was conducted to evaluate four different decoders (one 2D first-order ambisonic decoder and three 3D third-order decoders) by asking participants to rate the relative (perceived) realism of recorded outdoor soundscapes reproduced with these decoders. The results showed that the third-order decoders were ranked as more realistic.
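To illustrate the decoding step conceptually, the sketch below implements a basic first-order projection (sampling) decoder for a horizontal loudspeaker ring. It assumes SN3D-style first-order signals without FuMa weighting and ignores equalisation and calibration; the study itself used higher-order material and carefully designed 2D and 3D decoders.

```python
import numpy as np

def foa_projection_decoder(w, x, y, z, speaker_az_deg, speaker_el_deg):
    """Decode first-order ambisonics to loudspeaker feeds with a basic
    projection (sampling) decoder.

    w, x, y, z: first-order signals of shape (n_samples,). An SN3D-style
    encoding without FuMa W weighting is assumed; real decoders (and the
    higher-order ones in the study) require more careful design.
    Returns an array of shape (n_speakers, n_samples).
    """
    az = np.radians(np.asarray(speaker_az_deg, dtype=float))
    el = np.radians(np.asarray(speaker_el_deg, dtype=float))
    # Per-speaker gains for the four first-order channels (W, X, Y, Z)
    gains = np.stack([np.ones_like(az),
                      np.cos(az) * np.cos(el),
                      np.sin(az) * np.cos(el),
                      np.sin(el)], axis=1)          # shape (n_speakers, 4)
    signals = np.stack([w, x, y, z])                # shape (4, n_samples)
    return 0.5 * gains @ signals

# Example: a 440 Hz plane wave encoded from 45° azimuth, decoded to a square layout
t = np.linspace(0.0, 1.0, 48000)
src = np.sin(2 * np.pi * 440 * t)
az_src = np.radians(45.0)
w, x, y, z = src, src * np.cos(az_src), src * np.sin(az_src), np.zeros_like(src)
feeds = foa_projection_decoder(w, x, y, z, [45, 135, -135, -45], [0, 0, 0, 0])
# Loudest on the 45° speaker, silent on the opposite (-135°) one
print(np.round(np.sqrt((feeds ** 2).mean(axis=1)), 3))
```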

Citations: 0
Not just cybersickness: short-term effects of popular VR game mechanics on physical discomfort and reaction time
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-15 | DOI: 10.1007/s10055-024-01007-x
Sara Vlahovic, Lea Skorin-Kapov, Mirko Suznjevic, Nina Pavlin-Bernardic

Uncomfortable sensations that arise during virtual reality (VR) use have always been among the industry’s biggest challenges. While certain VR-induced effects, such as cybersickness, have garnered a lot of interest from academia and industry over the years, others have been overlooked and underresearched. Recently, the research community has been calling for more holistic approaches to studying the issue of VR discomfort. Focusing on active VR gaming, our article presents the results of two user studies with a total of 40 participants. Incorporating state-of-the-art VR-specific measures (the Simulation Task Load Index—SIM-TLX, Cybersickness Questionnaire—CSQ, Virtual Reality Sickness Questionnaire—VRSQ) into our methodology, we examined workload, musculoskeletal discomfort, device-related discomfort, cybersickness, and changes in reaction time following VR gameplay. Using a set of six different active VR games (three per study), we attempted to quantify and compare the prevalence and intensity of VR-induced symptoms across different genres and game mechanics. Varying between individuals, as well as games, the diverse symptoms reported in our study highlight the importance of including measures of VR-induced effects other than cybersickness into VR gaming user studies, while questioning the suitability of the Simulator Sickness Questionnaire (SSQ)—arguably the most prevalent measure of VR discomfort in the field—for use with active VR gaming scenarios.

Citations: 0
Multimodal emotion classification using machine learning in immersive and non-immersive virtual reality
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-06 | DOI: 10.1007/s10055-024-00989-y
Rodrigo Lima, Alice Chirico, Rui Varandas, Hugo Gamboa, Andrea Gaggioli, Sergi Bermúdez i Badia

Affective computing has been widely used to detect and recognize emotional states. The main goal of this study was to detect emotional states using machine learning algorithms automatically. The experimental procedure involved eliciting emotional states using film clips in an immersive and non-immersive virtual reality setup. The participants’ physiological signals were recorded and analyzed to train machine learning models to recognize users’ emotional states. Furthermore, two subjective emotional rating scales were provided to rate each emotional film clip. Results showed no significant differences between presenting the stimuli in the two degrees of immersion. Regarding emotion classification, it emerged that for both physiological signals and subjective ratings, user-dependent models have a better performance when compared to user-independent models. We obtained an average accuracy of 69.29 ± 11.41% and 71.00 ± 7.95% for the subjective ratings and physiological signals, respectively. On the other hand, using user-independent models, the accuracy we obtained was 54.0 ± 17.2% and 24.9 ± 4.0%, respectively. We interpreted these data as the result of high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future work, we intend to develop new classification algorithms and transfer them to real-time implementation. This will make it possible to adapt a virtual reality environment in real time, according to the user’s emotional state.
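The user-dependent versus user-independent distinction corresponds to two different cross-validation schemes: validating within each participant's own trials versus holding out whole participants. The sketch below demonstrates both schemes on placeholder data with scikit-learn; the features, labels, and SVM classifier are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, trials_per_subject, n_features = 10, 20, 8

# Placeholder physiological features and balanced binary emotion labels
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = np.tile(np.repeat([0, 1], trials_per_subject // 2), n_subjects)
subjects = np.repeat(np.arange(n_subjects), trials_per_subject)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# User-independent: leave-one-subject-out (train on 9 subjects, test on the 10th)
loso_scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"user-independent accuracy: {loso_scores.mean():.2f} ± {loso_scores.std():.2f}")

# User-dependent: cross-validate within each subject's own trials
per_subject = [cross_val_score(clf, X[subjects == s], y[subjects == s],
                               cv=StratifiedKFold(n_splits=4)).mean()
               for s in range(n_subjects)]
print(f"user-dependent accuracy:  {np.mean(per_subject):.2f} ± {np.std(per_subject):.2f}")
# Random features, so both accuracies hover around chance; the point is the split structure.
```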

Citations: 0
Tracking and co-location of global point clouds for large-area indoor environments
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-05-04 | DOI: 10.1007/s10055-024-01004-0
Nick Michiels, Lode Jorissen, Jeroen Put, Jori Liesenborgs, Isjtar Vandebroeck, Eric Joris, Frank Van Reeth

Extended reality (XR) experiences are on the verge of becoming widely adopted in diverse application domains. An essential part of the technology is accurate tracking and localization of the headset to create an immersive experience. A subset of the applications require perfect co-location between the real and the virtual world, where virtual objects are aligned with real-world counterparts. Current headsets support co-location for small areas, but suffer from drift when scaling up to larger ones such as buildings or factories. This paper proposes tools and solutions for this challenge by splitting up the simultaneous localization and mapping (SLAM) into separate mapping and localization stages. In the pre-processing stage, a feature map is built for the entire tracking area. A global optimizer is applied to correct the deformations caused by drift, guided by a sparse set of ground truth markers in the point cloud of a laser scan. Optionally, further refinement is applied by matching features between the ground truth keyframe images and their rendered-out SLAM estimates of the point cloud. In the second, real-time stage, the rectified feature map is used to perform localization and sensor fusion between the global tracking and the headset. The results show that the approach achieves robust co-location between the virtual and the real 3D environment for large and complex tracking environments.
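A core ingredient of such a pipeline is aligning the SLAM feature map to ground-truth markers from the laser scan. The sketch below shows only the simpler rigid (Kabsch) alignment between corresponding marker positions; the paper's global optimizer additionally corrects non-rigid drift deformation, which a single rigid transform cannot capture.

```python
import numpy as np

def rigid_align(source_pts, target_pts):
    """Least-squares rigid transform (R, t) mapping source points onto target points
    (Kabsch algorithm). Both arrays are (n, 3) with row correspondence assumed,
    e.g. SLAM estimates of marker positions vs. the same markers in the laser scan.
    """
    src_mean = source_pts.mean(axis=0)
    tgt_mean = target_pts.mean(axis=0)
    src_c = source_pts - src_mean
    tgt_c = target_pts - tgt_mean
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = tgt_mean - rot @ src_mean
    return rot, trans

# Synthetic check: recover a known drift-like rotation and offset
rng = np.random.default_rng(2)
markers_scan = rng.uniform(-10.0, 10.0, size=(6, 3))          # laser-scan ground truth
angle = np.radians(5.0)
true_rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                     [np.sin(angle),  np.cos(angle), 0.0],
                     [0.0,            0.0,           1.0]])
markers_slam = (markers_scan - np.array([0.3, -0.1, 0.05])) @ true_rot.T
rot, trans = rigid_align(markers_slam, markers_scan)
residual = np.linalg.norm(markers_slam @ rot.T + trans - markers_scan, axis=1)
print("max residual after alignment (m):", residual.max())
```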

Citations: 0
PublicVR: a virtual reality exposure therapy intervention for adults with speech anxiety
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-04-30 | DOI: 10.1007/s10055-024-00998-x
Fotios Spyridonis, Damon Daylamani-Zad, James Nightingale

Speech anxiety, or glossophobia, currently affects approximately 75% of the population, with potentially severe negative effects on those with this condition. Several treatments are currently available, and research shows that the use of Virtual Reality (VR) as a non-pharmacologic treatment can have positive effects on individuals suffering from such social phobias. However, there is a significant lack of treatments available for speech anxiety, even though such a large proportion of the population is affected by it. In this paper, we aim to contribute to efforts to alleviate the effects of speech anxiety through a VR intervention. Our VR solution was designed following the Exposure Therapy approach for treating social anxiety disorders. The evaluation of this work was twofold: A. to assess the ability of our solution to positively change participants’ perception of factors related to non-verbal communication contributing to anxiety toward public speaking, and B. to determine whether it is able to induce a sense of presence. We carried out an empirical evaluation study that measured participants’ self-reported anxiety level towards public speaking using the Personal Report of Public Speaking Anxiety and their perceived sense of presence using the iGroup Presence Questionnaire. Our results demonstrate the potential of VR Exposure Therapy solutions to help positively change the perception of factors related to non-verbal communication skills that contribute to public speaking anxiety in participants with self-reported speech anxiety symptoms. Our findings are of wider importance as they contribute to ongoing efforts to address social anxiety-related phobias.
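Pre/post questionnaire scores of this kind are typically compared with a paired test across the same participants. The sketch below shows such a comparison on invented PRPSA totals (the instrument's range is 34–170); the numbers and the choice of a paired t-test are illustrative, not the study's reported analysis.

```python
import numpy as np
from scipy import stats

# Illustrative pre/post PRPSA totals (range 34–170, higher = more anxiety).
# These values are invented for the sketch, not data from the study.
pre  = np.array([131, 118, 142, 125, 109, 137, 122, 130])
post = np.array([119, 112, 128, 120, 104, 126, 117, 121])

diff = pre - post
t_stat, p_value = stats.ttest_rel(pre, post)   # paired t-test, same participants
cohens_dz = diff.mean() / diff.std(ddof=1)     # effect size for paired samples
print(f"mean reduction: {diff.mean():.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}, d_z = {cohens_dz:.2f}")
```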

Citations: 0
Evaluation of visual, auditory, and olfactory stimulus-based attractors for intermittent reorientation in virtual reality locomotion
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-04-26 | DOI: 10.1007/s10055-024-00997-y
Jieun Lee, Seokhyun Hwang, Kyunghwan Kim, SeungJun Kim

In virtual reality, redirected walking (RDW) enables users to stay within the tracking area while feeling that they are traveling in a virtual space that is larger than the physical space. RDW uses a visual attractor to draw the user’s gaze and manipulates the scene for intermittent reorientation. However, repeated use can hinder immersion in the virtual world and weaken reorientation performance. In this study, we propose using sounds and smells as alternative stimuli to draw the user’s attention implicitly and sustain the attractor’s performance for intermittent reorientation. To achieve this, we integrated visual, auditory, and olfactory attractors into an all-in-one stimulation system. Experiments revealed that the auditory attractor caused the fastest reorientation, the olfactory attractor induced the widest angular difference, and the attractor with the combined auditory and olfactory stimuli induced the largest angular speed, keeping users from noticing the manipulation. The findings demonstrate the potential of nonvisual attractors to reorient users in situations requiring intermittent reorientation.
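Reorientation performance of an attractor can be summarised by how quickly the head turns toward it and at what angular speed. The sketch below computes such metrics from a head-yaw trace; the completion tolerance and the synthetic trace are assumptions for illustration, not the study's analysis.

```python
import numpy as np

def reorientation_metrics(yaw_deg, t_s, target_yaw_deg, done_tolerance_deg=10.0):
    """Time to reorient toward an attractor and mean angular speed on the way.

    yaw_deg: head-yaw samples (degrees); t_s: timestamps (s). The completion
    tolerance is an illustrative choice.
    """
    yaw = np.asarray(yaw_deg, dtype=float)
    t = np.asarray(t_s, dtype=float)
    reached = np.nonzero(np.abs(yaw - target_yaw_deg) <= done_tolerance_deg)[0]
    if reached.size == 0:
        return None, None                      # attractor never reached
    end = reached[0]
    duration = t[end] - t[0]
    angular_distance = np.abs(np.diff(yaw[:end + 1])).sum()
    mean_speed = angular_distance / duration if duration > 0 else 0.0
    return duration, mean_speed

# Synthetic turn from 0° to 90° at 60°/s, sampled at 60 Hz
t = np.arange(0.0, 2.0, 1 / 60)
yaw = np.clip(60.0 * t, 0.0, 90.0)
print(reorientation_metrics(yaw, t, target_yaw_deg=90.0))
```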

Citations: 0
Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision
IF 4.2 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-04-16 | DOI: 10.1007/s10055-024-00987-0
Alexander Neugebauer, Nora Castner, Björn Severitt, Katarina Stingl, Iliya Ivanov, Siegfried Wahl

In this work, we explore the potential and limitations of simulating gaze-contingent tunnel vision conditions using Virtual Reality (VR) with built-in eye tracking technology. This approach promises an easy and accessible way of expanding study populations and test groups for visual training, visual aids, or accessibility evaluations. However, it is crucial to assess the validity and reliability of simulating these types of visual impairments and evaluate the extent to which participants with simulated tunnel vision can represent real patients. Two age-matched participant groups were recruited: The first group (n = 8, aged 20–60, average 49.1 ± 13.2) consisted of patients diagnosed with Retinitis pigmentosa (RP). The second group (n = 8, aged 27–59, average 46.5 ± 10.8) consisted of visually healthy participants with simulated tunnel vision. Both groups carried out different visual tasks in a virtual environment for 30 min per day over the course of four weeks. Task performances as well as gaze characteristics were evaluated in both groups over the course of the study. Using the ’two one-sided tests for equivalence’ method, the two groups were found to perform similarly in all three visual tasks. Significant differences between groups were found in different aspects of their gaze behavior, though most of these aspects seem to converge over time. Our study evaluates the potential and limitations of using Virtual Reality technology to simulate the effects of tunnel vision within controlled virtual environments. We find that the simulation accurately represents the performance of RP patients in the context of group averages, but fails to fully replicate effects on gaze behavior.
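Gaze-contingent tunnel vision is typically simulated by masking everything beyond a fixed angular eccentricity around the tracked gaze direction. The sketch below shows that geometric test in isolation; the 10° field radius and the vector convention are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def visible_under_tunnel_vision(point_dirs, gaze_dir, field_radius_deg=10.0):
    """Return which viewing directions fall inside a simulated tunnel of vision.

    point_dirs: (n, 3) vectors from the eye to scene points.
    gaze_dir: (3,) gaze vector from the eye tracker.
    field_radius_deg: simulated visual-field radius (illustrative value; in practice
    it would be matched to the patient group being simulated).
    """
    point_dirs = np.asarray(point_dirs, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    cosines = point_dirs @ gaze_dir / np.linalg.norm(point_dirs, axis=1)
    eccentricity_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return eccentricity_deg <= field_radius_deg

gaze = np.array([0.0, 0.0, 1.0])                        # looking straight ahead
points = np.array([[0.0, 0.0, 1.0],                     # at the gaze point
                   [np.sin(np.radians(8)), 0.0, np.cos(np.radians(8))],    # 8° off
                   [np.sin(np.radians(25)), 0.0, np.cos(np.radians(25))]]) # 25° off
print(visible_under_tunnel_vision(points, gaze))        # -> [ True  True False]
```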

Citations: 0