
Latest Publications: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)

Coretet: A 21st Century Virtual Reality Musical Instrument for Solo and Networked Ensemble Performance
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797825
Rob Hamilton
Coretet is a virtual reality instrument that explores the translation of performance gesture and mechanic from traditional bowed string instruments into an inherently non-physical implementation. Built using the Unreal Engine 4 and Pure Data, Coretet offers musicians a flexible and articulate musical instrument to play as well as a networked performance environment capable of supporting and presenting a traditional four-member string quartet. This paper discusses the technical implementation of Coretet and explores the musical and performative possibilities through the translation of physical instrument design into virtual reality.
Citations: 2
Scale - Unexplored Opportunities for Immersive Technologies in Place-based Learning
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797867
Jiayan Zhao, A. Klippel
Immersive technologies have the potential to overcome physical limitations and virtually deliver field site experiences, for example, into the classroom. Yet, little is known about the features of immersive technologies that contribute to successful place-based learning. Immersive technologies afford embodied experiences by mimicking natural embodied interactions through a user's egocentric perspective. Additionally, they allow for beyond reality experiences integrating contextual information that cannot be provided at actual field sites. The current study singles out one aspect of place-based learning: Scale. In an empirical evaluation, scale was manipulated as part of two immersive virtual field trip (iVFT) experiences in order to disentangle its effect on place-based learning. Students either attended an actual field trip (AFT) or experienced one of two iVFTs using a head-mounted display. The iVFTs either mimicked the actual field trip or provided beyond reality experiences offering access to the field site from an elevated perspective using pseudo-aerial 360° imagery. Results show that students with access to the elevated perspective had significantly better scores, for example, on their spatial situation model (SSM). Our findings provide first results on how an increased (geographic) scale, which is accessible through an elevated perspective, boosts the development of SSMs. The reported study is part of a larger immersive education effort. Inspired by the positive results, we discuss our plan for a more rigorous assessment of scale effects on both self- and objectively assessed performance measures of spatial learning.
Citations: 23
Live Stereoscopic 3D Image with Constant Capture Direction of 360° Cameras for High-Quality Visual Telepresence
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797876
Y. Ikei, Vibol Yem, Kento Tashiro, Toi Fujie, Tomohiro Amemiya, M. Kitazaki
To capture a remote 3D image, conventional stereo cameras attached to a robot head have been commonly used. However, when the head and cameras rotate, the buffered image is degraded by latency and motion blur, which may cause VR sickness. In the present study, we propose a method named TwinCam in which we use two 360° cameras spaced at the standard interpupillary distance and keep the direction of each lens constant in world coordinates, even when the camera bodies are rotated to reflect the orientation of the observer's head and the position of the eyes. We consider that this method can reduce the size of the image buffer sent to the observer, because each camera captures an omnidirectional image without lens rotation. This paper introduces the mechanical design of our camera system and demonstrates its potential for visual telepresence through three experiments. Experiment 1 confirmed the requirement of a stereoscopic rather than monoscopic camera for highly accurate depth perception, and Experiments 2 and 3 showed that our mechanical camera setup can reduce motion blur and VR sickness.
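The core geometric idea of TwinCam — counter-rotating each camera body so its capture direction stays fixed in world coordinates while its position still follows the eyes — can be sketched in 2D. This is our own yaw-only simplification for illustration; the function names and the default IPD value are assumptions, not details from the paper:

```python
import math

def counter_rotation_deg(head_yaw_deg: float) -> float:
    """Mount rotation (degrees) that cancels the head yaw, keeping each
    360-degree camera's capture direction constant in world coordinates."""
    return (-head_yaw_deg) % 360.0

def eye_positions(head_yaw_deg: float, ipd_m: float = 0.064):
    """2D positions of the two camera centres, spaced at the interpupillary
    distance and rotated about the head centre by the current head yaw.
    At yaw 0 the head faces +x and the eyes are offset along the y axis."""
    yaw = math.radians(head_yaw_deg)
    r = ipd_m / 2.0  # each camera sits on a circle of radius IPD/2
    left = (-r * math.sin(yaw), r * math.cos(yaw))
    right = (r * math.sin(yaw), -r * math.cos(yaw))
    return left, right
```

The positions track the head so stereo disparity is preserved, while the counter-rotation means no lens rotation ever appears in the captured omnidirectional image.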
Citations: 4
Combining Dynamic Field of View Modification with Physical Obstacle Avoidance
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798015
Fei Wu, Evan Suma Rosenberg
Motion sickness is a major cause of discomfort for users of virtual reality (VR) systems. Over the past several years, a number of techniques have been proposed to mitigate motion sickness, such as high-quality "room-scale" tracking systems, dynamic field of view modification, and displaying static or dynamic rest frames. At the same time, an absence of real-world spatial cues may cause trouble during movement in virtual reality, and users may collide with physical obstacles. To address both of these problems, we propose a novel technique that combines dynamic field of view modification with rest frames generated from 3D scans of the physical environment. As the user moves, either physically and/or virtually, the displayed field of view can be artificially reduced to reveal a wireframe visualization of the real-world geometry in the periphery, rendered in the same reference frame as the user. Although empirical studies have not yet been conducted, informal testing suggests that this approach is a promising method for reducing motion sickness and improving user safety at the same time.
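Dynamic field of view modification is typically driven by the user's current speed. A minimal sketch of such a restrictor is shown below; the linear mapping, the function name, and the example angles are our assumptions for illustration, not values from the paper:

```python
def restricted_fov_deg(base_fov: float, min_fov: float,
                       speed: float, max_speed: float) -> float:
    """Linearly narrow the displayed field of view with the user's
    (physical and/or virtual) speed. The periphery freed up by the
    restriction is where a wireframe scan of the real room could be
    rendered, as in the technique described above."""
    t = min(max(speed / max_speed, 0.0), 1.0)  # clamp to [0, 1]
    return base_fov - t * (base_fov - min_fov)
```

For example, with a 110° base FOV and a 70° minimum, standing still leaves the full 110° visible, while moving at half the maximum speed narrows the view to 90°.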
Citations: 16
Perception of Motion-Adaptive Color Images Displayed by a High-Speed DMD Projector
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797850
Wakana Oshiro, S. Kagami, K. Hashimoto
Recent progress of high-speed projectors using DMD (Digital Micromirror Device) has enabled low-latency motion adaptability of displayed images, which is a key challenge in achieving projection-based dynamic interaction systems. This paper presents evaluation of different approaches in achieving fast motion adaptability with DMD projectors through a subjective image evaluation experiment and a discrimination experiment. The results suggest that the approach proposed by the authors, which updates the image position for every binary frame instead of for every video frame, applied to 60-fps video input offers perceptual image quality comparable with the quality offered by 500-fps projection.
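The distinction between the two update strategies can be made concrete with a toy calculation (our own sketch — the frame rates, binary-frame count, and pixel velocity are assumed for illustration only). A DMD renders one color video frame as a sequence of binary frames, and the authors' approach repositions the image at the binary-frame rate rather than once per video frame:

```python
def offsets_per_binary_frame(velocity_px_s: float, video_fps: int = 60,
                             binary_per_video: int = 8) -> list:
    """Image-position offsets (px) for each binary frame within one video
    frame when the position is updated at the binary-frame rate, so the
    image tracks a moving target throughout the video frame."""
    binary_fps = video_fps * binary_per_video
    return [round(velocity_px_s * i / binary_fps)
            for i in range(binary_per_video)]

def offsets_per_video_frame(binary_per_video: int = 8) -> list:
    """Baseline: the position is held for the whole video frame, so the
    image lags a moving target until the next video-frame update."""
    return [0] * binary_per_video
```

With a target moving at 480 px/s and 8 binary frames per 60-fps video frame, per-binary-frame updates advance the image one pixel per binary frame instead of letting an 8-pixel error accumulate, which is the mechanism behind the reported 500-fps-like perceptual quality.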
Citations: 3
EMBRACE - a VR piece about disability and inclusion (2018)
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798138
F. Schroeder
“Embrace” is a work created as part of a UK AHRC (Arts and Humanities Research Council) funded project on Immersive and Inclusive Music Technologies. The piece is for a VR headset and was developed as one of the grant's proposed outputs. The research investigated how emerging technologies (such as VR) can best be adapted to suit people with different abilities (movement-impaired people, for example). “Embrace” allows the viewer to experience issues around disability. It tells the story of two disabled musicians (one visually impaired and one wheelchair-bound) and how both experience exclusion before a concert. We also learn some background about the nature of their disabilities. The work aims to stimulate the viewer to embrace difference; hence the title “Embrace”. “Embrace” is a short immersive experience about inclusion and embracing difference. It was produced at the Sonic Arts Research Centre, Queen's University Belfast as part of the AHRC/EPSRC Next Generation of Immersive Experiences Programme 2018.
Citations: 0
Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798286
Nicholas Rewkowski, Atul Rungta, M. Whitton, M. Lin
Many virtual locomotion interfaces allowing users to move in virtual reality have been built and evaluated, such as redirected walking (RDW), walking-in-place (WIP), and joystick input. RDW has been shown to be among the most natural and immersive, as it supports real walking, and many newer methods further adapt RDW to allow for customization and greater immersion. Most of these methods have been demonstrated to work with vision; in this paper, we evaluate the ability of a general distractor-based RDW framework to be used with only auditory display. We conducted two studies evaluating the differences between RDW with auditory distractors and other distractor modalities using distraction ratio, virtual and physical path information, immersion, simulator sickness, and other measurements. Our results indicate that auditory RDW has the potential to be used with complex navigational tasks, such as crossing streets and avoiding obstacles. It can be used without designing the system specifically for audio-only users. Additionally, sense of presence and simulator sickness remain reasonable across all user groups.
Citations: 12
EPICSAVE Lifesaving Decisions - a Collaborative VR Training Game Sketch for Paramedics
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798365
Jonas Schild, Leonard Flock, Patrick Martens, Benjamin Roth, Niklas Schünernann, Eduard Heller, Sebastian Misztal
Practical, collaborative training of severe emergencies that occur too rarely within regular curricular training programs (e.g., anaphylactic shock in children patients) is difficult to realize. Multi-user virtual reality and serious game technologies can be used to provide collaborative training in dynamic settings [1], [2]. However, actual training effects seem to depend on a high presence and supportive usability [2]. EPICSAVE Lifesaving Decisions shows a novel approach that aims at further improving on these factors using an emotional scenario and collaborative game mechanics. We present a trailer video of a game sketch which creatively explores serious game design for collaborative virtual reality training systems. The game invites two paramedic trainees and one paramedic trainer into a dramatic scenario at a family theme park: A 5-year old child shows symptoms of anaphylactic shock. While the trainees begin their diagnostics procedures, a bystander, the girl's grandfather, intervenes and challenges the players' authority. Our research explores how VR game mechanics, i.e., optional narrative, authority skills and rewards, mini games, and interactive virtual characters may extend training quality and user experience over pure VR training simulations. The video exemplifies a concept that extends prior developments of a multi-user VR training simulation setup presented in [2], [3].
Citations: 7
Towards a Framework on Accessible and Social VR in Education
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798100
Anthony Scavarelli, A. Arya, Robert J. Teather
In this extended abstract, we argue that for virtual reality to be a successful tool in social learning spaces (e.g. classrooms or museums) we must also look outside the virtual reality literature to provide greater focus on accessible and social collaborative content. We explore work within Computer Supported Collaborative Learning (CSCL) and social VR domains to move towards developing a design framework for socio-educational VR. We also briefly describe our work-in-progress application framework, Circles, including these features in WebVR.
Citations: 13
Sphere in Hand: Exploring Tangible Interaction with Immersive Spherical Visualizations
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797887
David Englmeier, Isabel Schönewald, A. Butz, Tobias Höllerer
The emerging possibilities of data analysis and exploration in virtual reality raise the question of how users can be best supported during such interactions. Spherical visualizations allow for convenient exploration of certain types of data. Our tangible sphere, exactly aligned with the sphere visualizations shown in VR, implements a very natural way of interaction and utilizes senses and skills trained in the real world. This work is motivated by the prospect of creating in VR a low-cost, tangible, robust, handheld spherical display that would be difficult or impossible to implement as a physical display. Our concept enables us to gain insights about the impact of a fully tangible embodiment of a virtual object on task performance, comprehension of patterns, and user behavior. After a description of the implementation, we discuss the advantages and disadvantages of our approach, taking into account different handheld spherical displays utilizing outside and inside projection.
{"title":"Sphere in Hand: Exploring Tangible Interaction with Immersive Spherical Visualizations","authors":"David Englmeier, Isabel Schönewald, A. Butz, Tobias Höllerer","doi":"10.1109/VR.2019.8797887","DOIUrl":"https://doi.org/10.1109/VR.2019.8797887","url":null,"abstract":"The emerging possibilities of data analysis and exploration in virtual reality raise the question of how users can be best supported during such interactions. Spherical visualizations allow for convenient exploration of certain types of data. Our tangible sphere, exactly aligned with the sphere visualizations shown in VR, implements a very natural way of interaction and utilizes senses and skills trained in the real world. This work is motivated by the prospect to create in VR a low-cost, tangible, robust, handheld spherical display that would be difficult or impossible to implement as a physical display. Our concept enables it to gain insights about the impact of a fully tangible embodiment of a virtual object on task performance, comprehension of patterns, and user behavior. After a description of the implementation we discuss the advantages and disadvantages of our approach, taking into account different handheld spherical displays utilizing outside and inside projection.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"362 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122343983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
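The Sphere in Hand abstract above hinges on keeping the virtual sphere "exactly aligned" with the tracked physical sphere. As an illustration only — this is not the authors' implementation, and the function name and calibration-offset parameter are hypothetical — a minimal pose-alignment sketch in Python, assuming the tracker reports a world-space position and a unit quaternion:

```python
def align_virtual_sphere(tracker_pos, tracker_quat, calib_offset):
    """Map a tracked physical sphere's pose onto its virtual counterpart.

    tracker_pos:  (x, y, z) world-space position of the tracker, in metres.
    tracker_quat: (w, x, y, z) unit quaternion giving the tracker's orientation.
    calib_offset: (x, y, z) vector from the tracker mount to the sphere's
                  centre, expressed in the tracker's local frame (hypothetical
                  calibration step; a real rig would measure this once).
    Returns the virtual sphere's world-space centre and its orientation.
    """
    w, x, y, z = tracker_quat
    ox, oy, oz = calib_offset
    # Rotate the local calibration offset into world space using the
    # rotation matrix derived from the unit quaternion.
    rx = (1 - 2 * (y * y + z * z)) * ox + 2 * (x * y - w * z) * oy + 2 * (x * z + w * y) * oz
    ry = 2 * (x * y + w * z) * ox + (1 - 2 * (x * x + z * z)) * oy + 2 * (y * z - w * x) * oz
    rz = 2 * (x * z - w * y) * ox + 2 * (y * z + w * x) * oy + (1 - 2 * (x * x + y * y)) * oz
    px, py, pz = tracker_pos
    # The virtual sphere inherits the tracker's orientation unchanged, so
    # rolling the physical sphere rolls the visualization one-to-one.
    return (px + rx, py + ry, pz + rz), tracker_quat
```

Applying this every frame keeps the virtual sphere coincident with the physical one, which is what lets real-world manipulation skills transfer directly, as the abstract argues.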