
Latest Publications: 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)

Designing Viewpoint Transition Techniques in Multiscale Virtual Environments
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00083
Jong-In Lee, P. Asente, W. Stuerzlinger
Viewpoint transitions have been shown to improve users' spatial orientation and help them build a cognitive map when they are navigating an unfamiliar virtual environment. Previous work has investigated transitions in single-scale virtual environments, focusing on trajectories and continuity. We extend this work with an in-depth investigation of transition techniques in multiscale virtual environments (MVEs). We identify challenges in navigating MVEs with nested structures and assess how different transition techniques affect spatial understanding and usability. Through two user studies, we investigated transition trajectories, interactive control of transition movement, and speed modulation in a nested MVE. We show that some types of viewpoint transitions enhance users' spatial awareness and confidence in their spatial orientation and reduce the need to revisit a target point of interest multiple times.
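The abstract does not include implementation details; as a minimal illustration of what a viewpoint transition with speed modulation can look like, the sketch below eases a camera along a straight trajectory. The function names and the smoothstep profile are illustrative assumptions, not the authors' technique.

```python
import numpy as np

def smoothstep(t: float) -> float:
    """Ease-in/ease-out speed profile: velocity is zero at both endpoints."""
    return t * t * (3.0 - 2.0 * t)

def transition_viewpoint(p_start, p_end, n_frames=90):
    """Move a camera along a straight trajectory between two viewpoints,
    modulating speed so the motion starts and ends gently."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    for i in range(n_frames + 1):
        t = smoothstep(i / n_frames)           # modulated progress in [0, 1]
        yield (1.0 - t) * p_start + t * p_end  # linear path, eased timing

# Example: transition from an eye-height viewpoint toward a nested miniature scene.
for pos in transition_viewpoint([0.0, 1.7, 0.0], [4.0, 0.2, -3.0], n_frames=3):
    print(pos)
```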
Citations: 0
Style-aware Augmented Virtuality Embeddings (SAVE)
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00032
John L. Hoster, Dennis Ritter, Kristian Hildebrand
We present an augmented virtuality (AV) pipeline that enables the user to interact with real-world objects through stylised representations which match the VR scene and thereby preserve immersion. It consists of three stages: First, the object of interest is reconstructed from images and corresponding camera poses recorded with the VR headset, or alternatively a retrieval model finds a fitting mesh from the ShapeNet dataset. Second, a style transfer technique adapts the mesh to the VR game scene in order to preserve consistent immersion. Third, the stylised mesh is superimposed on the real object in real time to ensure interactivity even if the real object is moved. Our pipeline serves as proof of concept for style-aware AV embeddings.
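The three-stage pipeline lends itself to a simple orchestration sketch. The stubs below only mirror the structure described in the abstract (reconstruct or retrieve, stylise, superimpose); all function names, signatures, and data shapes are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    source: str  # "reconstruction" or "retrieval"

def reconstruct(images, poses):
    """Stage 1a: multi-view reconstruction from headset images and poses (stub)."""
    return Mesh(source="reconstruction") if images and poses else None

def retrieve_from_shapenet(images):
    """Stage 1b: fall back to retrieving a similar ShapeNet mesh (stub)."""
    return Mesh(source="retrieval")

def stylise(mesh, scene_style):
    """Stage 2: style transfer so the mesh matches the VR scene (stub)."""
    return (mesh, scene_style)

def superimpose(stylised, object_pose):
    """Stage 3: overlay the stylised mesh on the tracked real object (stub)."""
    return {"render": stylised, "pose": object_pose}

def save_pipeline(images, poses, scene_style, object_pose):
    mesh = reconstruct(images, poses) or retrieve_from_shapenet(images)
    return superimpose(stylise(mesh, scene_style), object_pose)

print(save_pipeline(["img0"], ["pose0"], "low-poly", (0.0, 0.0, 1.0)))
```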
Citations: 0
Virtual Reality in Supporting Charitable Giving: The Role of Vicarious Experience, Existential Guilt, and Need for Stimulation
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00079
Ou Li, Han Qiu
Although a growing number of charities have used virtual reality (VR) technology for fundraising activities, with better results than ever before, little research has been undertaken on what factors make VR beneficial in supporting charitable giving. The primary goal of this study is to investigate the underlying mechanism of VR in supporting charitable giving, which extends the current literature on VR and donation behaviors. The findings of this study indicated that VR charitable appeals increase actual money donations when compared to the traditional two-dimensional (2D) format and that this effect is achieved through a serial mediating effect of vicarious experience and existential guilt. Findings also identify the need for stimulation as a boundary condition, indicating that those with a higher (vs. lower) need for stimulation were more (vs. less) affected by the mediating mechanism of VR charitable appeals on donations. This work contributes to our understanding of the relationship between VR technology and charitable giving, as well as to future research on VR and its prosocial applications.
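The serial mediation claim (VR appeal, then vicarious experience, then existential guilt, then donation) can be illustrated with the standard regression-based estimate of a serial indirect effect. The sketch below uses synthetic data and invented effect sizes; it is not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)       # 0 = 2D appeal, 1 = VR appeal
m1 = 0.6 * x + rng.normal(size=n)             # vicarious experience
m2 = 0.5 * m1 + 0.1 * x + rng.normal(size=n)  # existential guilt
y = 0.4 * m2 + 0.05 * x + rng.normal(size=n)  # donation amount

def ols(y, *regressors):
    """Least-squares coefficients [intercept, regressor1, regressor2, ...]."""
    X = np.column_stack([np.ones_like(y), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a1 = ols(m1, x)[1]        # X -> M1
d21 = ols(m2, x, m1)[2]   # M1 -> M2, controlling for X
b2 = ols(y, x, m1, m2)[3] # M2 -> Y, controlling for X and M1
print("serial indirect effect a1*d21*b2 =", a1 * d21 * b2)
```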
Citations: 0
Delta Path Tracing for Real-Time Global Illumination in Mixed Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00020
Yang Xu, Yu Jiang, Shibo Wang, Kang Li, Guohua Geng
Visual coherence between real and virtual objects is important in mixed reality (MR), and illumination consistency is one of the key aspects of achieving it. Apart from matching the illumination of virtual objects to the real environment, the change in illumination that inserted virtual objects produce on the real scene must also be considered, but this is difficult to compute in real time due to the heavy computational demands of global illumination. In this work, we propose delta path tracing (DPT), which computes only the radiance that the virtual objects block from the light sources at the primary hit points of Monte Carlo path tracing, and then combines the blocked radiance and multi-bounce indirect illumination with the image of the real scene. Multiple importance sampling (MIS) between the BRDF and the environment map handles all-frequency environment maps captured by a panorama camera. Compared to conventional differential rendering methods, our method remarkably reduces the number of environment-map accesses and avoids rendering the scene twice, significantly improving performance. We implement our method using hardware-accelerated ray tracing on modern GPUs, and the results demonstrate that our method can render global illumination at real-time frame rates and produce plausible visual coherence between real and virtual objects in MR environments.
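Two ingredients named in the abstract are generic enough to sketch: the differential-style composite (real image minus blocked radiance plus the virtual contribution) and the MIS balance heuristic between BRDF and environment-map sampling. The sketch below is a schematic per-pixel illustration under those standard definitions, not the authors' GPU implementation.

```python
import numpy as np

def balance_heuristic(pdf_own: float, pdf_other: float) -> float:
    """MIS weight for a sample from one strategy when the other strategy
    could have generated the same direction (balance heuristic)."""
    s = pdf_own + pdf_other
    return pdf_own / s if s > 0.0 else 0.0

def mis_estimate(samples):
    """Combine BRDF-sampled and environment-map-sampled contributions.
    Each sample is (radiance_estimate, pdf_own, pdf_other)."""
    return sum(balance_heuristic(p, q) * f / p for f, p, q in samples)

def delta_composite(real_rgb, blocked, virtual):
    """Differential-rendering-style composite: subtract the radiance the
    virtual object blocks from the lights, add its own contribution."""
    return np.clip(np.asarray(real_rgb) - blocked + virtual, 0.0, 1.0)

# One BRDF sample and one environment-map sample, then a per-pixel composite.
L = mis_estimate([(0.8, 0.9, 0.2), (1.5, 1.1, 0.3)])
print(L, delta_composite([0.6, 0.6, 0.6], 0.15, 0.05))
```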
Citations: 0
Investigating Guardian Awareness Techniques to Promote Safety in Virtual Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00078
Sixuan Wu, Jiannan Li, Maurício Sousa, Tovi Grossman
Virtual Reality (VR) can completely immerse users in a virtual world and provide little awareness of bystanders in the surrounding physical environment. Current technologies use predefined guardian area visualizations to set safety boundaries for VR interactions. However, bystanders cannot perceive these boundaries and may collide with VR users if they accidentally enter guardian areas. In this paper, we investigate four awareness techniques on mobile phones and smartwatches to help bystanders avoid invading guardian areas. These techniques include augmented reality boundary overlays and visual, auditory, and haptic alerts indicating bystanders' distance from guardians. Our findings suggest that the proposed techniques effectively keep participants clear of the safety boundaries. More specifically, using augmented reality overlays, participants could avoid guardians with less time, and haptic alerts caused less distraction.
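As a rough sketch of how distance-based alert escalation could be driven on a bystander's phone or watch: the polygon boundary, the thresholds, and the three-level escalation below are invented for illustration (the paper's techniques also include AR boundary overlays).

```python
import math

def distance_to_boundary(point, polygon):
    """Shortest distance from a 2D point to the edges of a closed polygon."""
    def seg_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        cx, cy = ax + t * dx, ay + t * dy
        return math.hypot(px - cx, py - cy)
    return min(seg_dist(point, polygon[i], polygon[(i + 1) % len(polygon)])
               for i in range(len(polygon)))

def alert_level(distance, near=0.5, mid=1.5):
    """Map distance to the guardian (metres) to an escalating alert."""
    if distance < near:
        return "haptic"  # about to cross into the guardian area
    if distance < mid:
        return "audio"
    return "visual"

guardian = [(0, 0), (3, 0), (3, 3), (0, 3)]  # room-scale play area
print(alert_level(distance_to_boundary((3.4, 1.0), guardian)))  # "haptic"
```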
Citations: 0
iARVis: Mobile AR Based Declarative Information Visualization Authoring, Exploring and Sharing
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00017
Junjie Chen, Chenhui Li, Sicheng Song, Changbo Wang
We present iARVis, a proof-of-concept toolkit for creating, experiencing, and sharing mobile AR-based information visualization environments. Over the past years, AR has emerged as a promising medium for information and data visualization beyond the physical media and the desktop, enabling interactivity and eliminating spatial limits. However, the creation of such environments remains difficult and frequently necessitates low-level programming expertise and lengthy hand encodings. We present a declarative approach for defining the augmented reality (AR) environment, including how information is automatically positioned, laid out, and interacted with, to improve the efficiency and flexibility of constructing AR-based information visualization environments. We provide fundamental layout and visual components such as the grid, rich text, images, and charts for the development of complex visualization widgets, as well as automatic targeting methods based on image and object tracking for the development of the AR environment. To increase design efficiency, we also provide features such as hot-reload and several creation levels for both novice and advanced users. We also investigate how the augmented reality-based visualization environment could persist and be shared through the internet and provide ways for storing, sharing, and restoring the environment to give a continuous and seamless experience. To demonstrate the viability and extensibility, we evaluate iARVis using a variety of use cases along with performance evaluation and expert reviews.
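The paper's actual declarative grammar is not reproduced in the abstract; the following is a hypothetical spec in the same spirit (an anchor, a grid layout, and text/chart children), with a toy walker standing in for a renderer. All field names and file names are invented.

```python
# Hypothetical declarative widget spec; the real grammar is the authors'.
widget_spec = {
    "anchor": {"type": "image", "target": "poster_front.png"},
    "layout": {"type": "grid", "rows": 2, "columns": 1, "spacing": 0.02},
    "children": [
        {"type": "text", "content": "Conference attendance by year"},
        {"type": "chart", "kind": "bar", "data": "attendance.csv",
         "x": "year", "y": "attendees"},
    ],
}

def describe(spec):
    """Walk the spec the way a renderer would before placing AR content."""
    anchor = spec["anchor"]
    print(f"anchored to {anchor['type']} target '{anchor['target']}'")
    print(f"layout: {spec['layout']['type']}")
    for child in spec["children"]:
        label = child.get("kind") or child.get("content", "")
        print(f"  - {child['type']}: {label}")

describe(widget_spec)
```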
Citations: 0
RemoteTouch: Enhancing Immersive 3D Video Communication with Hand Touch
Pub Date : 2023-02-28 DOI: 10.1109/VR55154.2023.00016
Yizhong Zhang, Zhiqi Li, Sicheng Xu, Chong Li, Jiaolong Yang, Xin Tong, B. Guo
Recent research advances have significantly improved the visual realism of immersive 3D video communication. In this work we present a method to further enhance this immersive experience by adding a hand-touch capability ("remote hand clapping"). In our system, each meeting participant sits in front of a large screen with haptic feedback. The local participant can reach a hand out to the screen and perform hand clapping with the remote participant as if the two were separated only by a virtual pane of glass. A key challenge in emulating remote hand touch is realistically rendering the participant's hand and arm as the hand touches the screen. When the hand is very close to the screen, the RGBD data required for realistic rendering is no longer available. To tackle this challenge, we present a dual representation of the user's hand. Our dual representation not only preserves the high-quality rendering usually found in recent image-based rendering systems but also allows the hand to reach the screen. This is possible because the dual representation includes both an image-based model and a 3D geometry-based model, with the latter driven by a hand skeleton tracked by a side-view camera. In addition, the dual representation provides a distance-based fusion of the image-based and 3D geometry-based models as the hand moves closer to the screen. The result is that the two models mutually enhance each other, leading to realistic and seamless rendering. Our experiments demonstrate that our method provides a consistent hand-contact experience between remote users and improves the immersive experience of 3D video communication.
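The distance-based fusion can be sketched generically: the blend weight for the geometry-based model rises as the hand approaches the screen, where RGBD data vanishes. The thresholds and the per-channel blend below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fusion_weight(hand_to_screen: float, d_near=0.05, d_far=0.30) -> float:
    """Blend weight for the geometry-based hand model: 1 at the screen
    (where RGBD data is unavailable), 0 beyond d_far (image-based only)."""
    t = (d_far - hand_to_screen) / (d_far - d_near)
    return float(np.clip(t, 0.0, 1.0))

def render_hand(image_based_rgb, geometry_based_rgb, hand_to_screen):
    """Distance-based fusion of the two hand representations."""
    w = fusion_weight(hand_to_screen)
    return w * geometry_based_rgb + (1.0 - w) * image_based_rgb

# Mid-transition: the hand is 10 cm from the screen.
img = np.array([0.80, 0.60, 0.50])
geo = np.array([0.70, 0.55, 0.50])
print(render_hand(img, geo, 0.10), "weight:", fusion_weight(0.10))
```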
Citations: 0
Real-Time Recognition of In-Place Body Actions and Head Gestures using Only a Head-Mounted Display
Pub Date : 2023-02-25 DOI: 10.1109/VR55154.2023.00026
Jingbo Zhao, Mingjun Shao, Yaojun Wang, Ruolin Xu
Body actions and head gestures are natural interfaces for interaction in virtual environments. Existing methods for in-place body action recognition often require hardware beyond a head-mounted display (HMD), making body action interfaces difficult to introduce to ordinary virtual reality (VR) users, who usually possess only an HMD. In addition, there is no unified solution for recognizing in-place body actions and head gestures. This potentially hinders the exploration of in-place body actions and head gestures for novel interaction experiences in virtual environments. We present a unified two-stream 1-D convolutional neural network (CNN) that recognizes body actions when a user performs walking-in-place (WIP) and head gestures when a user stands still, wearing only an HMD. Compared to previous approaches, our method requires no specialized hardware or tracking devices other than an HMD and can recognize a significantly larger number of body actions and head gestures than other existing methods. In total, ten in-place body actions and eight head gestures can be recognized with the proposed method, which makes it a readily available body action interface (head gestures included) for interaction with virtual environments. We demonstrate one utility of the interface through a virtual locomotion task. Results show that the present body action interface is reliable in detecting body actions for the VR locomotion task but is physically demanding compared to a touch-controller interface. The present body action interface is promising for new VR experiences and applications, especially VR fitness applications where workouts are intended.
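The abstract specifies a two-stream 1-D CNN over HMD data but not its exact architecture. The PyTorch sketch below assumes one stream for positional channels and one for rotational channels, with 18 output classes (10 actions plus 8 gestures); all layer choices are guesses for illustration.

```python
import torch
import torch.nn as nn

class TwoStream1DCNN(nn.Module):
    """Sketch of a two-stream 1-D CNN over HMD pose sequences: one stream
    for positional channels, one for rotational channels (assumed split)."""
    def __init__(self, n_classes=18, seq_len=120):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
        self.pos_stream = stream(3)   # x, y, z position over time
        self.rot_stream = stream(4)   # quaternion orientation over time
        self.head = nn.Linear(64 * 2, n_classes)  # 10 actions + 8 gestures

    def forward(self, pos, rot):
        p = self.pos_stream(pos).flatten(1)  # (batch, 64)
        r = self.rot_stream(rot).flatten(1)  # (batch, 64)
        return self.head(torch.cat([p, r], dim=1))

model = TwoStream1DCNN()
logits = model(torch.randn(2, 3, 120), torch.randn(2, 4, 120))
print(logits.shape)  # torch.Size([2, 18])
```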
Citations: 0
An EEG-based Experiment on VR Sickness and Postural Instability While Walking in Virtual Environments
Pub Date : 2023-02-22 DOI: 10.1109/VR55154.2023.00025
C. A. T. Cortes, Chin-Teng Lin, Tien-Thong Nguyen Do, Hsiang-Ting Chen
Previous studies showed that natural walking reduces susceptibility to VR sickness. However, many users still experience VR sickness when wearing VR headsets that allow free walking in room-scale spaces. This paper studies VR sickness and postural instability while the user walks in an immersive virtual environment, using an electroencephalogram (EEG) headset and a full-body motion capture system. The experiment induced VR sickness by gradually increasing the translation gain beyond the user's detection threshold. A between-group comparison of participants with and without VR sickness symptoms found some significant differences in postural stability but none in gait patterns during walking. In the EEG analysis, the group with VR sickness showed a reduction in alpha power, a phenomenon previously linked to a higher workload and to efforts to maintain postural control. In contrast, the group without VR sickness exhibited brain activity linked to fine cognitive-motor control. The EEG result provides new insight into the postural instability theory: participants with VR sickness could maintain their postural stability at the cost of a higher cognitive workload. Our results also indicate that the analysis of lower-frequency power could complement behavioral data for continuous VR sickness detection in both stationary and mobile VR setups.
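The alpha-power reduction reported here is a standard band-power measure. A minimal sketch of computing it from one EEG channel with Welch's PSD follows; the band limits and sampling rate are conventional choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg: np.ndarray, fs: float, band=(8.0, 12.0)) -> float:
    """Band power in the alpha range for one EEG channel, via Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return float(psd[mask].sum() * df)  # rectangular integration over the band

# Synthetic 10 Hz-dominated signal sampled at 250 Hz for 20 s.
fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(42)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"alpha power: {alpha_power(eeg, fs):.3f}")
```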
Citations: 2
MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration
Pub Date : 2023-02-15 DOI: 10.1109/VR55154.2023.00059
Catarina G. Fidalgo, Maurício Sousa, Daniel Mendes, R. K. D. Anjos, Daniel Medeiros, K. Singh, Joaquim Jorge
Remote collaborative work has become pervasive in many settings, ranging from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example to assist in the design of new devices such as customized prosthetics, vehicles, or buildings. Discussing such shared 3D content face-to-face, however, poses a variety of challenges, such as ambiguities, occlusions, and differing viewpoints, all of which decrease mutual awareness and in turn lead to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space that improves mutual understanding and awareness. Our approach distorts the remote user's gestures so that they are correctly reflected in the local user's reference space when face-to-face. To measure what two users perceive in common when using pointing gestures in a shared 3D space, we introduce a novel metric called pointing agreement. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness.
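A toy version of the two ideas in the abstract: reflecting a remote user's pointing ray across the shared interface plane so it reads correctly face-to-face, and scoring pointing agreement as "both rays select the same target". The plane, the targets, and the nearest-target rule are simplifications, not the paper's method.

```python
import numpy as np

N = np.array([0.0, 0.0, 1.0])  # normal of the shared interface plane at z = 0

def mirror(v):
    """Reflect a point or direction across the shared interface plane."""
    return v - 2.0 * np.dot(v, N) * N

def pointed_target(origin, direction, targets):
    """Index of the target centre closest to the pointing ray."""
    d = direction / np.linalg.norm(direction)
    def ray_dist(c):
        w = c - origin
        return np.linalg.norm(w - np.dot(w, d) * d)
    return min(range(len(targets)), key=lambda i: ray_dist(targets[i]))

targets = [np.array([0.0, 0.0, 0.2]), np.array([0.3, 0.1, 0.2])]

# Local user points at the first target.
local = pointed_target(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.2]), targets)

# The remote user's ray, expressed in their mirror-image frame, is reflected
# across the plane before being compared in the local space.
remote = pointed_target(mirror(np.array([0.0, 0.0, -1.0])),
                        mirror(np.array([0.0, 0.0, 1.2])), targets)

print("pointing agreement:", local == remote)  # True
```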
Citations: 2