
Latest publications: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)

A New 360 Camera Design for Multi Format VR Experiences
Pub Date: 2019-03-23 DOI: 10.1109/VR.2019.8798226
Xinyu Zhang, Yao Zhao, Nikk Mitchell, Wensong Li
We present a new 360 camera design for creating 360 videos for immersive VR experiences. We place eight fish-eye lenses on a circle; four interlaced fish-eye lenses are re-oriented slightly upward to cover the scene above. To the best of our knowledge, our camera has the smallest diameter of any existing stereo multi-lens rig on the market. Our camera can be used to create 2D, 3D, and 6DoF multi-format 360 videos. Due to its compact design, the minimum safe distance of our new camera is very short (approximately 30 cm), which allows users to create especially intimate immersive experiences. We also propose to characterize the camera design by its fractal ratio: the distance between adjacent viewpoints divided by the interpupillary distance. While most earlier camera designs have a fractal ratio $> 1$ or $= 1$, our camera has a fractal ratio $< 1$. Moreover, with an adjustable rendering interpupillary distance, our camera can flexibly control the interpupillary distance used when creating 3D 360 videos. Our camera design has high fault tolerance and can continue operating properly even if some individual lenses fail.
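The fractal ratio lends itself to a quick back-of-the-envelope check. The sketch below (with a hypothetical ring radius and a nominal 64 mm IPD, neither taken from the paper) computes the ratio of adjacent-viewpoint spacing to IPD for a circular ring of lenses:

```python
import math

def fractal_ratio(ring_radius_m, num_lenses, ipd_m=0.064):
    """Ratio of the distance between adjacent viewpoints on the lens
    ring (a chord of the circle) to the interpupillary distance."""
    adjacent_dist = 2.0 * ring_radius_m * math.sin(math.pi / num_lenses)
    return adjacent_dist / ipd_m

# A compact 8-lens ring (hypothetical 5 cm radius) yields a ratio < 1,
# while a larger legacy rig (10 cm radius) yields a ratio > 1.
compact = fractal_ratio(0.05, 8)
legacy = fractal_ratio(0.10, 8)
```

By this geometry, an eight-lens ring keeps the ratio under 1 for any radius below roughly 8.4 cm, which is consistent with the compact-design claim.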
Citations: 2
Project Butterfly: Synergizing Immersive Virtual Reality with Actuated Soft Exosuit for Upper-Extremity Rehabilitation
Pub Date: 2019-03-23 DOI: 10.1109/VR.2019.8798014
Aviv Elor, Steven Lessard, M. Teodorescu, S. Kurniawan
Immersive Virtual Reality paired with soft robotics may be synergized to create personalized assistive therapy experiences. Virtual worlds, powered by newly available low-cost, high-performance commercial Virtual Reality (VR) devices, can stimulate the user to enable engaging and accurate physical therapy. Soft robotic wearables are a versatile tool in such stimulation. This preliminary study investigates a novel rehabilitative VR experience, Project Butterfly (PBF), that synergizes VR Mirror Visual Feedback Therapy with soft robotic exoskeletal support. Nine users of varying ability explore an immersive gamified physiotherapy experience by following and protecting a virtual butterfly, complete with an actuated robotic wearable that motivates and assists the user in performing rehabilitative physical movement. Specifically, the goals of this study are to evaluate the feasibility, ease of use, and comfort of the proposed system. The study concludes with a set of design considerations for future immersive physical-rehabilitation robot-assisted games.
Citations: 16
Large-Scale Projection-Based Immersive Display: The Design and Implementation of LargeSpace
Pub Date: 2019-03-23 DOI: 10.1109/VR.2019.8798019
Hikaru Takatori, M. Hiraiwa, H. Yano, Hiroo Iwata
In this paper, we introduce LargeSpace, the world's largest immersive display, and discuss the principles of its design. To clarify the design of large-scale projection-based immersive displays, we address the optimum screen shape, projection approach, and arrangement of projectors and tracking cameras. In addition, a novel distortion correction method for panoramic stereo rendering is described. The method can be applied to any projection-based immersive display with any screen shape, and can generate real-time panoramic-stereoscopic views from the viewpoints of tracked participants. To validate the design principles and the rendering algorithm, we implement LargeSpace and confirm that the method can generate the correct perspective from any position inside the screen viewing area. We implement several applications and show that large-scale immersive displays can be used in the fields of art and experimental psychology.
Citations: 6
Optical System That Forms a Mid-Air Image Moving at High Speed in the Depth Direction
Pub Date: 2019-03-07 DOI: 10.1109/VR.2019.8798235
Yui Osato, Naoya Koizumi
Mid-air imaging technology expresses how virtual images move about in the real world. A conventional mid-air image display based on a retro-transmissive optical element moves a light source by the same distance the mid-air image is to be moved. In such displays, the linear actuator that moves the display serving as the light source makes the system large. To solve this problem, we designed an optical system that realizes high-speed movement of mid-air images without a linear actuator: a motor-driven rotating mirror generates a virtual image of the light source, and that virtual image moves at high speed.
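The paper's mechanism is not detailed beyond the rotating mirror, but the underlying geometry is standard: reflecting a fixed source in a planar mirror and rotating the mirror by θ rotates the virtual image by 2θ, so a modest motor speed moves the image quickly. A minimal sketch (source position and angles hypothetical):

```python
import math

def reflect(p, n):
    """Virtual image of point p in a planar mirror through the origin
    with unit normal n (reflection of p across the mirror plane)."""
    d = sum(pi * ni for pi, ni in zip(p, n))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, n))

def mirror_normal(theta_rad):
    """Mirror rotated by theta about the z-axis; normal in the xy-plane."""
    return (math.cos(theta_rad), math.sin(theta_rad), 0.0)

# A fixed source 1 m along the x-axis: rotating the mirror by 10 degrees
# rotates its virtual image about the axis by 20 degrees.
src = (1.0, 0.0, 0.0)
img0 = reflect(src, mirror_normal(0.0))
img1 = reflect(src, mirror_normal(math.radians(10.0)))
```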
Citations: 0
Perceived Space and Spatial Performance during Path-Integration Tasks in Consumer-Oriented Virtual Reality Environments
Pub Date: 2019-03-01 DOI: 10.1109/VR.2019.8798344
José L. Dorado, Pablo Figueroa, J. Chardonnet, F. Mérienne, J. T. Hernández
Studies using virtual reality environments (VEs) have shown that subjects can perform path-integration tasks with acceptable performance. However, in these studies, subjects could walk naturally across large tracking areas, or researchers provided them with large immersive displays. Unfortunately, these configurations are far from current consumer-oriented VEs (COVEs), and little is known about how their limitations influence this task. Using a triangle-completion paradigm, we assessed subjects' spatial performance during path-integration tasks in two consumer-oriented displays (an HTC Vive and a GearVR) and with two consumer-oriented interaction devices (a Virtuix Omni motion platform and a Touchpad Control). Our results show that when locomotion is available (motion-platform condition), there are significant effects of display and path. In contrast, when locomotion is mediated, no effect was found. Some future research directions are therefore proposed.
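As context for the triangle-completion paradigm, the ground-truth "homing" response for a trial follows from the two outbound legs and the turn between them; a sketch under the usual assumptions (planar walking, turn measured to the right, angles in degrees):

```python
import math

def homing_response(leg1_m, turn_right_deg, leg2_m):
    """Ground truth for one triangle-completion trial: after walking
    leg1_m, turning right by turn_right_deg, and walking leg2_m, return
    (distance home, further right turn needed to face the start)."""
    heading = 90.0 - turn_right_deg                    # start facing +y
    x = leg2_m * math.cos(math.radians(heading))
    y = leg1_m + leg2_m * math.sin(math.radians(heading))
    distance = math.hypot(x, y)
    home_bearing = math.degrees(math.atan2(-y, -x))    # direction back to origin
    turn_needed = (heading - home_bearing) % 360.0
    return distance, turn_needed

# Equilateral outbound path: 3 m, 120-degree right turn, 3 m. The correct
# response is another 120-degree right turn and a 3 m walk home.
dist, turn = homing_response(3.0, 120.0, 3.0)
```

Comparing a subject's produced turn and walked distance against this ground truth gives the angular and distance errors typically reported for such trials.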
Citations: 2
Reorient the Gazed Scene Towards the Center: Novel Virtual Turning Using Head and Gaze Motions and Blink
Pub Date: 2019-03-01 DOI: 10.1109/VR.2019.8798120
Yoshikazu Onuki, I. Kumazawa
We propose a novel virtual turning technique for stationary VR environments that reorients the gazed view towards the center. Prompt reorientation during rapid head motion and blinks performs unnoticeable scene switching, achieving a seamless user experience, especially for wide-angle turning. In contrast, continuous narrow-angle turning, realized by horizontally rotating the virtual world according to face orientation, enhances the sense of reality. The proposed method is a hybrid of these two turning schemes. Experiments using simulator sickness and presence questionnaires revealed that our methods achieved comparable or lower sickness scores and higher presence scores than conventional smooth and snap turns.
Citations: 5
iVRNote: Design, Creation and Evaluation of an Interactive Note-Taking Interface for Study and Reflection in VR Learning Environments
Pub Date: 2019-03-01 DOI: 10.1109/VR.2019.8798338
Yi-Ting Chen, Chi-Hsuan Hsu, Chih-Han Chung, Yu-Shuen Wang, Sabarish V. Babu
In this contribution, we design, implement, and evaluate the pedagogical benefits of a novel interactive note-taking interface (iVRNote) in VR for learning and reflecting on lectures. In future VR learning environments, students will face challenges in taking notes while wearing a head-mounted display (HMD). To solve this problem, we installed a digital tablet on the desk and provided several tools in VR to facilitate the learning experience. Specifically, we track the stylus's position and orientation in the physical world and then render a virtual stylus in VR. In other words, when students see a virtual stylus somewhere on the desk, they can reach out with their hand for the physical stylus. The information provided also lets them know where they will draw or write before the stylus touches the tablet. Since the presented iVRNote system is a digital environment, we also help students save effort in taking extensive notes by providing several functions, such as post-editing and picture taking, so that they can pay more attention to lectures in VR. We also record the time of each stroke in the note to help students review a lecture: they can select a part of their note to revisit the corresponding segment in a virtual online lecture. Figures and the accompanying video demonstrate the feasibility of the presented iVRNote system. To evaluate the system, we conducted a user study with 20 participants to assess the preference and pedagogical benefits of the iVRNote interface. The feedback provided by the participants was overall positive and indicated that the iVRNote interface could be effective in VR learning experiences.
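The stroke-to-segment review feature described above amounts to a sorted-boundary lookup; this sketch (segment boundaries hypothetical, not from the paper) maps a stroke timestamp to the lecture segment being played when it was drawn:

```python
import bisect

def stroke_to_segment(stroke_time_s, segment_starts_s):
    """Return the index of the lecture segment that was playing when a
    note stroke with the given timestamp was drawn. segment_starts_s is
    a sorted list of segment start times in seconds."""
    return bisect.bisect_right(segment_starts_s, stroke_time_s) - 1

# Hypothetical segment boundaries (seconds into the recorded lecture).
segments = [0.0, 120.0, 300.0, 540.0]
idx = stroke_to_segment(310.5, segments)  # stroke drawn during segment 2
```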
Citations: 24
Augmented Reality Map Navigation with Freehand Gestures
Pub Date: 2019-03-01 DOI: 10.1109/VR.2019.8798340
Kadek Ananta Satriadi, Barrett Ens, Maxime Cordeil, B. Jenny, Tobias Czauderna, Wesley Willett
Freehand gesture interaction has long been proposed as a ‘natural’ input method for Augmented Reality (AR) applications, yet has been little explored for intensive applications like multiscale navigation. In multiscale navigation, such as digital map navigation, pan and zoom are the predominant interactions. A position-based input mapping (e.g. grabbing metaphor) is intuitive for such interactions, but is prone to arm fatigue. This work focuses on improving digital map navigation in AR with mid-air hand gestures, using a horizontal intangible map display. First, we conducted a user study to explore the effects of handedness (unimanual and bimanual) and input mapping (position-based and rate-based). From these findings we designed DiveZoom and TerraceZoom, two novel hybrid techniques that smoothly transition between position- and rate-based mappings. A second user study evaluated these designs. Our results indicate that the introduced input-mapping transitions can reduce perceived arm fatigue with limited impact on performance.
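DiveZoom and TerraceZoom themselves are not specified in this abstract, but the two mapping families they transition between can be sketched generically; the gains and dead-zone value below are hypothetical:

```python
def update_zoom_position(zoom, hand_dz_m, gain=4.0):
    """Position-based mapping: the zoom level directly follows the
    hand's displacement from the gesture's start point."""
    return gain * hand_dz_m

def update_zoom_rate(zoom, hand_dz_m, dt_s, gain=2.0, deadzone_m=0.02):
    """Rate-based mapping: displacement beyond a dead zone sets the
    zoom *velocity*, so holding the hand still keeps zooming."""
    excess = max(abs(hand_dz_m) - deadzone_m, 0.0)
    sign = 1.0 if hand_dz_m >= 0 else -1.0
    return zoom + sign * gain * excess * dt_s

def update_zoom_hybrid(zoom, hand_dz_m, dt_s, blend):
    """blend in [0, 1]: 0 = pure position control, 1 = pure rate control.
    Varying blend smoothly avoids an abrupt change in control feel."""
    zp = update_zoom_position(zoom, hand_dz_m)
    zr = update_zoom_rate(zoom, hand_dz_m, dt_s)
    return (1.0 - blend) * zp + blend * zr
```

Position control is intuitive but demands sustained arm motion, while rate control lets a small held offset do large traversals; blending between them is one way to trade intuitiveness against fatigue.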
Citations: 45
Generating Synthetic Humans for Learning 3D Pose Estimation
Pub Date: 2019-03-01 DOI: 10.1109/VR.2019.8797894
Kohei Aso, D. Hwang, H. Koike
We generate synthetic annotated data for learning 3D human pose estimation with an egocentric fisheye camera. Synthetic humans are rendered from a virtual fisheye camera with random backgrounds, random clothing, and random lighting parameters. In addition to RGB images, we generate ground truth for 2D/3D poses and location heat-maps. Capturing large numbers of varied images and labeling them manually for learning is not required. This approach can be used in challenging situations, such as capturing training data in sports.
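A common form of the location heat-map ground truth is a 2D Gaussian centered on each projected joint; the abstract does not specify its exact formulation, so the map size and σ below are hypothetical:

```python
import math

def joint_heatmap(u, v, size=64, sigma=2.0):
    """Ground-truth location heat-map for one joint: a 2D Gaussian
    centered at (u, v) in pixel coordinates, rendered alongside each
    synthetic frame so no manual labeling is needed."""
    return [[math.exp(-((x - u) ** 2 + (y - v) ** 2) / (2.0 * sigma ** 2))
             for x in range(size)]
            for y in range(size)]

# The map peaks at the annotated joint position and decays smoothly.
hm = joint_heatmap(20, 31)
```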
Citations: 1
VR-HMD Eye Tracker in Active Visual Field Testing
Pub Date: 2019-03-01 DOI: 10.1109/VR.2019.8798030
Katsuyoshi Hotta, O. Prima, Takashi Imabuchi, Hisayoshi Ito
Visual field defects (VFDs) are difficult for most patients to recognize because of the filling-in mechanism of the human brain. The current visual field test displays light sources within the range of the effective visual field and records the patient's response upon recognition of each light stimulus. Since these responses are determined subjectively by the patient, the resulting measure may be less reliable. The test may take more than 30 minutes and requires the patient to keep the gaze and head fixed, which can impose a physical burden on the patient. In this study, we propose active visual field testing (AVFT) based on a high-speed virtual reality head-mounted display (VR-HMD) eye tracker, which increases testing reliability and reduces the physical burden during the test. Our tracker runs at up to 240 Hz, allowing rapid eye movements to be measured and visual fixations and saccades to be detected precisely; these provide essential elements for evaluating defects in the visual field. The characteristics of visual fixations and saccades are used to confirm when each stimulus is recognized by the patient during the test. Our experiment shows that each test can be conducted in 5 minutes.
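The abstract does not state which fixation/saccade detection algorithm is used; a standard choice at such sampling rates is velocity-threshold identification (I-VT), sketched here for a 1-D gaze trace with a hypothetical 30°/s threshold:

```python
def classify_ivt(gaze_deg, hz=240.0, threshold_deg_s=30.0):
    """Velocity-threshold (I-VT) labeling of evenly sampled 1-D gaze
    angles: 'sacc' where angular speed exceeds the threshold, 'fix'
    elsewhere (the first sample defaults to 'fix')."""
    dt = 1.0 / hz
    labels = ["fix"]
    for prev, cur in zip(gaze_deg, gaze_deg[1:]):
        speed = abs(cur - prev) / dt
        labels.append("sacc" if speed > threshold_deg_s else "fix")
    return labels

# 240 Hz samples: steady fixation, a fast jump, then fixation again.
trace = [0.0, 0.01, 0.02, 3.0, 6.0, 6.01, 6.02]
labels = classify_ivt(trace)
```

At 240 Hz each sample spans about 4.2 ms, so even a short saccade covers several samples, which is what makes this kind of per-sample velocity test workable.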
由于人脑的填充机制,视野缺陷(vfd)难以被大多数患者识别。目前的视野测试显示的是有效视野范围内的光源,并获取患者识别该光刺激后的反应。由于这些反应是由患者主观决定的,因此结果测量可能不太可靠。这种方法可能需要30多分钟,需要患者将目光和头部固定在可能给患者带来身体负担的地方。在本研究中,我们提出了一种基于高速虚拟现实头戴式显示器(VR-HMD)眼动仪的主动视野测试(AVFT),可以提高测试的可靠性并减轻测试过程中的身体负担。我们的跟踪器运行高达240Hz,允许测量快速眼球运动,以精确检测视觉固定和扫视,这为评估视野缺陷提供了基本要素。在测试过程中,利用视固定和扫视的特征来确认每个刺激何时被患者识别。我们的实验表明,每次测试可以在5分钟内完成。
{"title":"VR-HMD Eye Tracker in Active Visual Field Testing","authors":"Katsuyoshi Hotta, O. Prima, Takashi Imabuchi, Hisayoshi Ito","doi":"10.1109/VR.2019.8798030","DOIUrl":"https://doi.org/10.1109/VR.2019.8798030","url":null,"abstract":"Visual field defects (VFDs) are difficult for most patients to recognize because of the filling-in mechanism of the human brain. The current visual field test displays light sources within the range of the effective visual field and records the patient's responses after each light stimulus is recognized. Since these responses are determined subjectively by the patient, the resulting measurements may be less reliable. The test may take more than 30 minutes and requires the patient to keep the gaze and head fixed, which can impose a physical burden on the patient. In this study, we propose active visual field testing (AVFT) based on a high-speed virtual reality head-mounted display (VR-HMD) eye tracker, which increases testing reliability and reduces the physical burden during the test. Our tracker runs at up to 240 Hz, allowing rapid eye movements to be measured so that visual fixations and saccades, which provide essential elements for evaluating visual field defects, can be precisely detected. The characteristics of visual fixations and saccades are used to confirm when each stimulus is recognized by the patient during the test. Our experiment shows that each test can be conducted in 5 minutes.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123769245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3