
Latest publications: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)

Keynote Speaker: Let's Unleash Entertainment! VR Possibilities Learned through Entertainment Facility “VR Zone”
Pub Date : 2019-03-23 DOI: 10.1109/vr.2019.8798301
Junichiro Koyama, Y. Tamiya
We have developed and operated 23 different VR activities while developing and expanding our VR Entertainment Facility VR ZONE over the years in Odaiba (2016), Shinjuku (2017), and Osaka (2018). Drawing on these experiences, we will share some of our know-how regarding the qualities and development of VR Entertainment, as well as its future possibilities.
Citations: 0
Hybrid Camera System for Telepresence with Foveated Imaging
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798011
M. Syawaludin, Chanho Kim, Jae-In Hwang
To improve the sense of telepresence for a local HMD user, a high-resolution view of the remote environment is necessary. However, current commodity omnidirectional cameras cannot provide sufficient resolution for the human eye. Using a higher-resolution omnidirectional camera is also infeasible because it would increase the streaming bandwidth. We propose a hybrid camera system that can convey a higher resolution for the HMD user's viewport ROI region within the available bandwidth. The hybrid camera consists of an omnidirectional camera and a PTZ camera mounted close to each other. The HMD user's head orientation controls the PTZ camera orientation. The HMD user also controls the zoom level of the PTZ camera to achieve higher resolution, up to the PTZ camera's maximum optical zoom. The remote environment view obtained from each camera is streamed to the HMD user and then stitched into one combined view. This combined view simulates the human visual system (HVS) phenomenon called foveation, where only a small part of the human view is in high resolution and the rest is in low resolution.
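The stitching step described above, in which the high-resolution PTZ crop is pasted over the viewport ROI of the low-resolution omnidirectional view, can be sketched as follows. The array shapes, the hard (non-blended) paste, and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def composite_foveated(periphery, fovea, center):
    """Paste a high-res fovea patch into a low-res peripheral frame.

    periphery: HxWx3 low-resolution omnidirectional view (upsampled to display size)
    fovea:     hxwx3 high-resolution PTZ crop covering the viewport ROI
    center:    (row, col) of the gaze/ROI center in the peripheral frame
    """
    out = periphery.copy()
    h, w = fovea.shape[:2]
    # Clamp the patch rectangle to the frame bounds.
    r0 = max(center[0] - h // 2, 0)
    c0 = max(center[1] - w // 2, 0)
    r1 = min(r0 + h, out.shape[0])
    c1 = min(c0 + w, out.shape[1])
    out[r0:r1, c0:c1] = fovea[: r1 - r0, : c1 - c0]
    return out

# Low-res background (all black) with a bright high-res patch at the gaze point.
bg = np.zeros((100, 200, 3), dtype=np.uint8)
patch = np.full((20, 30, 3), 255, dtype=np.uint8)
frame = composite_foveated(bg, patch, center=(50, 100))
```

A real system would blend the patch edges and warp the PTZ crop into the panorama's projection before pasting; the hard overwrite above only illustrates where the high-resolution pixels land.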
Citations: 4
Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798186
Mark Murnane, Max Breitmeyer, Cynthia Matuszek, Don Engel
Collecting data in robotics, especially on human-robot interactions, traditionally requires a physical robot in a prepared environment, which presents substantial scalability challenges. First, robots provide many possible points of system failure, while the availability of human participants is limited. Second, for tasks such as language learning, it is important to create environments that provide interesting, varied use cases. Traditionally, this requires prepared physical spaces for each scenario being studied. Finally, the expense associated with acquiring robots and preparing spaces places serious limitations on the reproducibility of experiments. We therefore propose a novel mechanism for using virtual reality to simulate robotic sensor data in a series of prepared scenarios. This allows for a reproducible dataset that other labs can recreate using commodity VR hardware. We demonstrate the effectiveness of this approach with an implementation that includes a simulated physical context, a reconstruction of a human actor, and a reconstruction of a robot. This evaluation shows that even a simple “sandbox” environment allows us to simulate robot sensor data, as well as the movement (e.g., view-port) and speech of humans interacting with the robot in a prescribed scenario.
Citations: 3
Simulation and Evaluation of Three-User Redirected Walking Algorithm in Shared Physical Spaces
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798319
Tianyang Dong, Yifan Song, Yuqi Shen, Jing Fan
Shifting from single-person experiences to multi-user interactions is an inevitable trend in virtual reality technology. Existing methods primarily address the problem of one- or two-user redirected walking and do not respond to the additional challenges of potential collisions among three or more users who are moving both virtually and physically. To apply redirected walking to multiple users immersed in virtual reality experiences, we present a novel algorithm for three-user redirected walking in shared physical spaces. In addition, we present the steps to apply three-user redirected walking to multiplayer VR scenes, where the users are divided into different groups based on their motion states. This strategy can therefore be applied to each group to address the challenges of redirected walking when there are more than three users. The results show that sharing a space using our three-user redirected walking algorithm is completely feasible.
Citations: 4
In-Situ Labeling for Augmented Reality Language Learning
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798358
Brandon Huynh, J. Orlosky, Tobias Höllerer
Augmented Reality is a promising interaction paradigm for learning applications. It has the potential to improve learning outcomes by merging educational content with spatial cues and semantically relevant objects within a learner's everyday environment. The impact of such an interface could be comparable to the method of loci, a well-known memory enhancement technique used by memory champions and polyglots. However, using Augmented Reality in this manner is still impractical for a number of reasons. Scalable object recognition and consistent labeling of objects is a significant challenge, and interaction with arbitrary (unmodeled) physical objects in AR scenes has consequently not been well explored. To help address these challenges, we present a framework for in-situ object labeling and selection in Augmented Reality, with a particular focus on language learning applications. Our framework uses a generalized object recognition model to identify objects in the world in real time, integrates eye tracking to facilitate selection and interaction within the interface, and incorporates a personalized learning model that dynamically adapts to a student's growth. We show our current progress in the development of this system, including preliminary tests and benchmarks. We explore challenges with using such a system in practice, and discuss our vision for the future of AR language learning applications.
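One building block the framework describes is gaze-based selection of labeled objects. A minimal sketch of picking the object whose direction is angularly closest to the gaze ray follows; the data layout, names, and nearest-angle policy are assumptions for illustration, not the framework's actual interface:

```python
import math

def select_gazed_object(gaze_dir, objects):
    """Pick the labeled object whose direction is angularly closest to the gaze ray.

    gaze_dir: unit 3-vector of the eye-tracked gaze direction.
    objects:  dict mapping label -> unit direction vector from the viewer.
    """
    def angle(a, b):
        # Clamp the dot product to guard acos against rounding drift.
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))
    return min(objects, key=lambda name: angle(gaze_dir, objects[name]))

# Two labeled objects; the gaze points almost straight ahead (+z).
objects = {"cup": (0.0, 0.0, 1.0), "lamp": (1.0, 0.0, 0.0)}
picked = select_gazed_object((0.1, 0.0, 0.995), objects)
```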
Citations: 13
Freely Explore the Scene with 360° Field of View
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797922
Feng Dai, Chen Zhu, Yike Ma, Juan Cao, Qiang Zhao, Yongdong Zhang
By providing a 360° field of view, spherical panoramas are widely used in virtual reality (VR) systems and street view services. However, due to bandwidth or storage limitations, existing systems provide only sparsely captured panoramas and support limited interaction modes. Although there are methods that can synthesize novel views from captured panoramas, the generated novel views all lie on the lines connecting existing views, so these methods do not support free viewpoint navigation. In this paper, we propose a new panoramic image-based rendering method. Our method takes pre-captured images as input and can synthesize panoramas at novel views far from the input camera positions. Thus, it supports freely exploring the scene with a 360° field of view.
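Rendering from a spherical panorama rests on mapping a viewing direction to equirectangular pixel coordinates. A generic version of that lookup (a standard spherical-panorama convention, not the paper's synthesis method) can be sketched as:

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit viewing direction to pixel coords in an equirectangular panorama.

    Longitude (atan2 of x over z) spans the image width; latitude
    (asin of the y component) spans the height.
    """
    lon = math.atan2(dx, dz)   # [-pi, pi], 0 = straight ahead (+z)
    lat = math.asin(dy)        # [-pi/2, pi/2], positive = up
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Looking straight ahead (+z) maps to the center of a 4096x2048 panorama.
u, v = direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048)
```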
Citations: 1
Parasitic Body: Exploring Perspective Dependency in a Shared Body with a Third Arm
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798351
Ryo Takizawa, Atsushi Hiyama, Adrien Verhulst, Katie Seaborn, M. Fukuoka, M. Kitazaki, M. Inami, Maki Sugimoto
With advancements in robotics, systems featuring wearable robotic arms teleoperated by a third party are appearing. An important aspect of these systems is the visual feedback provided to the third-party operator. This can be achieved by placing a wearable camera on the robotic arm's “host,” or Main Body Operator (MBO), but such a setup makes the visual feedback dependent on the movements of the main body. Here we introduce a VR system called Parasitic Body to explore a shared-body concept in VR representative of the wearable robotic arm's “host” (the MBO) and of the teleoperator (here called the Parasite Body Operator (PBO)). Two users jointly operate a shared virtual body with a third arm: the MBO controls the main body, and the PBO controls a third arm sticking out from the left shoulder of the main body. We focus here on the perspective dependency of the PBO (indeed, the PBO's view depends on the movement of the MBO) in a “finding and reaching” task.
Citations: 1
Collaborative Data Analytics Using Virtual Reality
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797845
Huyen Nguyen, Benjamin Ward, U. Engelke, B. Thomas, Tomasz Bednarz
Immersive analytics allows large amounts of data and complex structures to be investigated concurrently. We propose a collaborative analytics system that benefits from new advances in immersive technologies for collaborators working in the early stages of data exploration. We implemented a combination of the Star Coordinates and Star Plot visualisation techniques to support the visualisation of multidimensional data and the encoding of datasets using simple and compact visual representations. To support data analytics tasks, we propose tools and interaction techniques for users to build decision trees for visualising and analysing data in a top-down manner.
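Star Coordinates, one of the two techniques used, projects an n-dimensional point to 2D by summing its attribute values along n axis vectors spread around a circle. A minimal sketch under that standard formulation (the even axis layout and the assumption of values pre-normalized to [0, 1] are generic choices, not this system's code):

```python
import math

def star_coordinates(point):
    """Project an n-dimensional point to 2D Star Coordinates.

    Dimension i gets a unit axis vector at angle 2*pi*i/n; the 2D position
    is the sum of attribute_value * axis_vector over all dimensions.
    """
    n = len(point)
    x = sum(v * math.cos(2 * math.pi * i / n) for i, v in enumerate(point))
    y = sum(v * math.sin(2 * math.pi * i / n) for i, v in enumerate(point))
    return x, y

# A point with weight only on dimension 0 lands on the positive x-axis.
px, py = star_coordinates([1.0, 0.0, 0.0, 0.0])
```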
Citations: 11
Estimation of Detection Thresholds for Redirected Turning
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797976
Junya Mizutani, Keigo Matsumoto, Ryohei Nagao, Takuji Narumi, T. Tanikawa, M. Hirose
Redirection makes it possible to walk around a vast virtual space within a limited real space, while providing a natural walking sensation, by applying a gain to the amount of movement in the real space. However, existing methods cannot manipulate the walking path while maintaining the naturalness of walking when turning at a corner. To realize natural manipulation when turning at a corner, this study proposes novel “turning gains”, which scale the real and virtual turning angles. The result of an experiment estimating the detection thresholds of turning gains indicates that when the turning radius is 0.5 m, discrimination is more difficult than for rotation gains (r = 0.0 m).
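A redirection gain is applied by scaling each real rotation increment before it drives the virtual camera; the detection-threshold question is how far this scale can deviate from one before users notice. A minimal sketch of the standard rotation-gain mapping (a generic illustration of the r = 0 case, not the authors' turning-gain implementation):

```python
def apply_rotation_gain(real_delta_deg, gain):
    """Scale a real head-rotation increment by a redirection gain.

    gain > 1 renders the virtual turn larger than the real one,
    gain < 1 renders it smaller; gain == 1 is a one-to-one mapping.
    """
    return real_delta_deg * gain

# A real 90-degree turn rendered with gain 1.2 becomes a 108-degree virtual turn.
virtual = apply_rotation_gain(90.0, 1.2)
```

Turning gains extend this idea to turns along an arc of nonzero radius, scaling the angle swept while the user walks the curve rather than rotating in place.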
Citations: 2
Vibro-tactile Feedback for Real-world Awareness in Immersive Virtual Environments
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798036
Dimitar Valkov, L. Linsen
In immersive virtual environments (IVEs), users' visual and auditory perception is replaced by computer-generated stimuli. Thus, knowing the positions of real objects is crucial for physical safety. While some solutions exist, e.g., using virtual replicas or visible cues indicating the interaction space boundaries, these limit the IVE design or depend on the hardware setup. Moreover, most solutions cannot handle lost tracking, erroneous tracker calibration, or moving obstacles. However, these are common scenarios, especially in the increasingly popular home virtual reality settings. In this paper, we present a stand-alone hardware device designed to alert IVE users to potential collisions with real-world objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We implemented different types of sensor-actuator mappings with the goal of finding a mapping function that is minimally obtrusive in normal use but alerts efficiently in risk situations.
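One plausible sensor-actuator mapping of the kind explored above is a thresholded distance ramp: silent beyond a safe range so it stays unobtrusive in normal use, with intensity growing as an obstacle approaches. The ranges and the linear curve below are illustrative assumptions, not the device's actual calibration:

```python
def vibration_intensity(distance_m, safe_m=1.5, min_m=0.2):
    """Map an obstacle distance to a vibration intensity in [0, 1].

    Beyond safe_m the actuator stays off; inside safe_m the intensity
    grows linearly, saturating at 1.0 once the obstacle is at min_m.
    """
    if distance_m >= safe_m:
        return 0.0
    if distance_m <= min_m:
        return 1.0
    return (safe_m - distance_m) / (safe_m - min_m)

# Far away -> off; at the safe boundary -> off; mid-range -> half; very close -> full.
intensities = [vibration_intensity(d) for d in (2.0, 1.5, 0.85, 0.2)]
```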
Citations: 9