
Latest Publications in IEEE Transactions on Visualization and Computer Graphics

The Hidden Face of the Proteus Effect: Deindividuation, Embodiment and Identification.
Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549849
Anna Martin Coesel, Beatrice Biancardi, Mukesh Barange, Stephanie Buisine

The Proteus effect describes how users of virtual environments adjust their attitudes to match stereotypes associated with their avatar's appearance. While numerous studies have demonstrated this phenomenon's reliability, its underlying processes remain poorly understood. This work investigates deindividuation's hypothesized but unproven role within the Proteus effect. Deindividuated individuals tend to follow situational norms rather than personal ones. Therefore, together with high embodiment and identification processes, deindividuation may lead to a stronger Proteus effect. We present two experimental studies. First, we demonstrated the emergence of the Proteus effect in a real-world academic context: engineering students obtained better scores in a statistics task when embodying an Albert Einstein avatar compared to a control avatar. In the second study, we tested the role of deindividuation by manipulating participants' exposure to different identity cues during the task. While we did not find a significant effect of deindividuation on participants' performance, our results highlight an unexpected pattern, with embodiment as a negative predictor and identification as a positive predictor of performance. These results open avenues for further research on the processes involved in the Proteus effect, particularly those focused on the relation between the avatar and the nature of the task to be performed. All supplemental materials are available at https://osf.io/au3wk/.

{"title":"The Hidden Face of the Proteus Effect: Deindividuation, Embodiment and Identification.","authors":"Anna Martin Coesel, Beatrice Biancardi, Mukesh Barange, Stephanie Buisine","doi":"10.1109/TVCG.2025.3549849","DOIUrl":"10.1109/TVCG.2025.3549849","url":null,"abstract":"<p><p>The Proteus effect describes how users of virtual environments adjust their attitudes to match stereotypes associated with their avatar's appearance. While numerous studies have demonstrated this phenomenon's reliability, its underlying processes remain poorly understood. This work investigates deindividuation's hypothesized but unproven role within the Proteus effect. Deindividuated individuals tend to follow situational norms rather than personal ones. Therefore, together with high embodiment and identification processes, deindividuation may lead to a stronger Proteus effect. We present two experimental studies. First, we demonstrated the emergence of the Proteus effect in a real-world academic context: engineering students got better scores in a statistical task when embodying Albert Einstein's avatar compared to a control one. In the second study, we tested the role of deindividuation by manipulating participants' exposure to different identity cues during the task. While we could not find a significant effect of deindividuation on the participants' performance, our results highlight an unexpected pattern, with embodiment as a negative predictor and identification as a positive predictor of performance. These results open avenues for further research on the processes involved in the Proteus effect, particularly those focused on the relation between the avatar and the nature of the task to be performed. All supplemental materials are available at https://osf.io/au3wk/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SummonBrush: Enhancing Touch Interaction on Large XR User Interfaces by Augmenting Users' Hands with Virtual Brushes.
Pub Date : 2025-03-11 DOI: 10.1109/TVCG.2025.3549553
Yang Tian, Zhao Su, Tianren Luo, Teng Han, Shengdong Zhao, Youpeng Zhang, Yixin Wang, BoYu Gao, Dangxiao Wang

Touch interaction is one of the fundamental interaction paradigms in XR, as users have become very familiar with touch interactions on physical touchscreens. However, users typically need to perform extensive arm movements to engage with XR user interfaces that are much larger than mobile device touchscreens. We propose the SummonBrush technique to facilitate easy access to hidden windows while interacting with large XR user interfaces, requiring minimal arm movements. The SummonBrush technique adds a virtual brush to the index fingertip of a user's hand. Upon making contact with a virtual user interface, the brush bends and diverges, and ink starts to diffuse in it. The more the brush bends and diverges, the more the ink diffuses. The user can summon hidden windows or background applications in situ by first pressing the brush against the user interface until ink fully fills the brush and then performing swipe gestures. The user can also press the brush against the thumbnails of background applications in situ to cycle through them quickly. Ecological studies showed that SummonBrush significantly reduced arm movement time by 39% and 34% when summoning hidden windows and activating/closing background applications, respectively, leading to a significant decrease in reported physical demand.
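The paper's implementation is not given here; the following minimal Python sketch models the described ink mechanic, in which sustained pressing fills the brush with ink and only a full brush enables the summon gesture. The class name, fill model, rates, and thresholds are all assumptions for illustration.

```python
# Illustrative sketch of the SummonBrush gating logic described above.
# Fill/drain rates, thresholds, and names are assumptions, not the authors' code.

class SummonBrushState:
    def __init__(self, fill_rate=2.0, drain_rate=1.0):
        self.ink = 0.0              # 0.0 = empty brush, 1.0 = ink fully diffused
        self.fill_rate = fill_rate
        self.drain_rate = drain_rate

    def update(self, press_depth: float, dt: float) -> None:
        """press_depth in [0, 1]: how far the virtual brush bends/diverges
        against the UI surface. Ink diffuses while pressing, drains otherwise."""
        if press_depth > 0.0:
            self.ink += self.fill_rate * press_depth * dt
        else:
            self.ink -= self.drain_rate * dt
        self.ink = min(1.0, max(0.0, self.ink))

    def can_summon(self) -> bool:
        # Swipe gestures summon hidden windows only once ink fills the brush.
        return self.ink >= 1.0

# Usage: one simulated second of a firm press at 60 Hz.
state = SummonBrushState()
for _ in range(60):
    state.update(press_depth=0.8, dt=1 / 60)
print(state.can_summon())  # True after roughly 0.6 s of firm pressing
```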

{"title":"SummonBrush: Enhancing Touch Interaction on Large XR User Interfaces by Augmenting Users' Hands with Virtual Brushes.","authors":"Yang Tian, Zhao Su, Tianren Luo, Teng Han, Shengdong Zhao, Youpeng Zhang, Yixin Wang, BoYu Gao, Dangxiao Wang","doi":"10.1109/TVCG.2025.3549553","DOIUrl":"10.1109/TVCG.2025.3549553","url":null,"abstract":"<p><p>Touch interaction is one of the fundamental interaction paradigms in XR, as users have become very familiar with touch interactions on physical touchscreens. However, users typically need to perform extensive arm movements for engaging with XR user interfaces much larger than mobile device touchscreens. We propose the SummonBrush technique to facilitate easy access to hidden windows while interacting with large XR user interfaces, requiring minimal arm movements. The SummonBrush technique adds a virtual brush to the index fingertip of a user's hand. Upon making contact with a virtual user interface, the brush bends and diverges and ink starts to diffuse in it. The more the brush bends and diverges, the more the ink diffuses. The user can summon hidden windows or background applications in situ, which is achieved by firstly pressing the brush against the user interface to make ink fully fill the brush and then perform swipe gestures. Also, the user can press the brush against the thumbtails of background applications in situ to quickly cycle them through. Ecological studies showed that SummonBrush significantly reduced the arm movement time by 39% and 34% in summoning hidden windows and activating/closing background applications, respectively, leading to a significant decrease in reported physical demand.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scaling Techniques for Exocentric Navigation Interfaces in Multiscale Virtual Environments.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549535
Jong-In Lee, Wolfgang Stuerzlinger

Navigating multiscale virtual environments necessitates an interaction method to travel across different levels of scale (LoS). Prior research has studied various techniques that enable users to seamlessly adjust their scale to navigate between different LoS based on specific user contexts. We introduce a scroll-based scale control method optimized for exocentric navigation, targeted at scenarios where speed and accuracy in continuous scaling are crucial. We pinpoint the challenges of scale control in settings with multiple LoS and evaluate how distinct designs of scaling techniques influence navigation performance and usability. Through a user study, we investigated two pivotal elements of a scaling technique: the input method and the scaling center. Our findings indicate that our scroll-based input method significantly reduces task completion time and error rate and enhances efficiency compared to the most frequently used bi-manual method. Moreover, we found that the choice of scaling center affects the ease of use of the scaling method, especially when paired with specific input methods.
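The abstract does not spell out the mapping from scroll input to scale. As a hedged sketch of how such a scroll-based, center-anchored scale control is commonly realized, the snippet below applies an exponential factor per scroll step and re-anchors the user's position so the chosen scaling center stays fixed; all names and the sensitivity constant are assumptions, not the paper's implementation.

```python
import numpy as np

def apply_scale_step(user_pos, user_scale, center, scroll_delta, sensitivity=0.1):
    """One scroll increment of exocentric scale control (illustrative sketch).

    An exponential mapping keeps scaling perceptually uniform across levels of
    scale; the user's offset from the scaling center is scaled by the same
    factor so the center does not drift while scaling."""
    factor = np.exp(sensitivity * scroll_delta)   # >1 grows the user, <1 shrinks
    new_scale = user_scale * factor
    new_pos = center + (user_pos - center) * factor
    return new_pos, new_scale

# Usage: three scroll ticks outward from the world origin.
pos, scale = np.array([2.0, 0.0, 5.0]), 1.0
pos, scale = apply_scale_step(pos, scale, center=np.zeros(3), scroll_delta=3.0)
print(pos, scale)  # offset and scale both grow by exp(0.3), about 1.35x
```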

{"title":"Scaling Techniques for Exocentric Navigation Interfaces in Multiscale Virtual Environments.","authors":"Jong-In Lee, Wolfgang Stuerzlinger","doi":"10.1109/TVCG.2025.3549535","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549535","url":null,"abstract":"<p><p>Navigating multiscale virtual environments necessitates an interaction method to travel across different levels of scale (LoS). Prior research has studied various techniques that enable users to seamlessly adjust their scale to navigate between different LoS based on specific user contexts. We introduce a scroll-based scale control method optimized for exocentric navigation, targeted at scenarios where speed and accuracy in continuous scaling are crucial. We pinpoint the challenges of scale control in settings with multiple LoS and evaluate how distinct designs of scaling techniques influence navigation performance and usability. Through a user study, we investigated two pivotal elements of a scaling technique: the input method and the scaling center. Our findings indicate that our scroll-based input method significantly reduces task completion time and error rate and enhances efficiency compared to the most frequently used bi-manual method. Moreover, we found that the choice of scaling center affects the ease of use of the scaling method, especially when paired with specific input methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Brain Signatures of Time Perception in Virtual Reality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549570
Sahar Niknam, Saravanakumar Duraisamy, Jean Botev, Luis A Leiva

Achieving a high level of immersion and adaptation in virtual reality (VR) requires precise measurement and representation of user state. While extrinsic physical characteristics such as locomotion and pose can be accurately tracked in real-time, reliably capturing mental states is more challenging. Quantitative psychology makes it possible to consider more intrinsic features such as emotion, attention, or cognitive load. Time perception, in particular, is strongly tied to users' mental states, including stress, focus, and boredom. However, research on objectively measuring the pace at which we perceive the passage of time is scarce. In this work, we investigate the potential of electroencephalography (EEG) as an objective measure of time perception in VR, exploring neural correlates with oscillatory responses and time-frequency analysis. To this end, we implemented a variety of time perception modulators in VR, collected EEG recordings, and labeled them with overestimation, correct estimation, and underestimation time perception states. We found clear EEG spectral signatures for these three states that persist across individuals, modulators, and modulation durations. These signatures can be integrated and applied to monitor and actively influence time perception in VR, allowing the virtual environment to be purposefully adapted to the individual to further increase immersion and improve user experience. A free copy of this paper and all supplemental materials are available at https://vrarlab.uni.lu/pub/brain-signatures.
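As a generic illustration of the kind of time-frequency feature extraction mentioned above, the sketch below computes average band power in canonical EEG bands from one channel using Welch's method. The band edges and Welch parameters are common signal-processing defaults, not the authors' exact analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Average spectral power per canonical EEG band for one channel.
    Generic example of oscillatory-response features; band limits and
    window length are common defaults, not the paper's exact setup."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    powers = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))
    return powers

# Usage: 10 s of synthetic data with a dominant 10 Hz (alpha) rhythm.
fs = 256.0
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(signal, fs))  # alpha power dominates
```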

{"title":"Brain Signatures of Time Perception in Virtual Reality.","authors":"Sahar Niknam, Saravanakumar Duraisamy, Jean Botev, Luis A Leiva","doi":"10.1109/TVCG.2025.3549570","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549570","url":null,"abstract":"<p><p>Achieving a high level of immersion and adaptation in virtual reality (VR) requires precise measurement and representation of user state. While extrinsic physical characteristics such as locomotion and pose can be accurately tracked in real-time, reliably capturing mental states is more challenging. Quantitative psychology allows considering more intrinsic features like emotion, attention, or cognitive load. Time perception, in particular, is strongly tied to users' mental states, including stress, focus, and boredom. However, research on objectively measuring the pace at which we perceive the passage of time is scarce. In this work, we investigate the potential of electroencephalography (EEG) as an objective measure of time perception in VR, exploring neural correlates with oscillatory responses and time-frequency analysis. To this end, we implemented a variety of time perception modulators in VR, collected EEG recordings, and labeled them with overestimation, correct estimation, and underestimation time perception states. We found clear EEG spectral signatures for these three states, that are persistent across individuals, modulators, and modulation duration. These signatures can be integrated and applied to monitor and actively influence time perception in VR, allowing the virtual environment to be purposefully adapted to the individual to increase immersion further and improve user experience. A free copy of this paper and all supplemental materials are available at https://vrarlab.uni.lu/pub/brain-signatures.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549887
Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik

Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self-exposure to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expressions. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body-part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.

{"title":"Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness.","authors":"Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik","doi":"10.1109/TVCG.2025.3549887","DOIUrl":"10.1109/TVCG.2025.3549887","url":null,"abstract":"<p><p>Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self exposures to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expression. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549882
Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth

In medical image visualization, path tracing of volumetric medical data like computed tomography (CT) scans produces lifelike three-dimensional visualizations. Immersive virtual reality (VR) displays can further enhance the understanding of complex anatomies. Going beyond the diagnostic quality of traditional 2D slices, they enable interactive 3D evaluation of anatomies, supporting medical education and planning. Rendering high-quality visualizations in real-time, however, is computationally intensive and impractical for compute-constrained devices like mobile headsets. We propose a novel approach utilizing Gaussian Splatting (GS) to create an efficient but static intermediate representation of CT scans. We introduce a layered GS representation, incrementally including different anatomical structures while minimizing overlap and extending the GS training to remove inactive Gaussians. We further compress the created model with clustering across layers. Our approach achieves interactive frame rates while preserving anatomical structures, with quality adjustable to the target hardware. Compared to standard GS, our representation retains some of the explorative qualities initially enabled by immersive path tracing. Selective activation and clipping of layers are possible at rendering time, adding a degree of interactivity to otherwise static GS models. This could enable scenarios where high computational demands would otherwise prohibit using path-traced medical volumes.
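The authors' implementation is not reproduced here. As a minimal sketch of the layered idea, assuming a simple per-layer container with hypothetical field names, selective activation and plane clipping at render time might look like this:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GaussianLayer:
    """One anatomical layer of a multi-layer splat model (illustrative)."""
    name: str
    means: np.ndarray        # (N, 3) Gaussian centers
    opacities: np.ndarray    # (N,) per-Gaussian opacity
    active: bool = True      # toggled at render time to show/hide the layer

@dataclass
class LayeredModel:
    layers: list = field(default_factory=list)

    def visible_gaussians(self, clip_plane=None):
        """Gather Gaussians from active layers, optionally clipping against a
        plane (normal, offset) to reveal inner anatomy at render time."""
        means, opacities = [], []
        for layer in self.layers:
            if not layer.active:
                continue
            keep = np.ones(len(layer.means), dtype=bool)
            if clip_plane is not None:
                normal, offset = clip_plane
                keep &= layer.means @ normal <= offset  # keep one half-space
            means.append(layer.means[keep])
            opacities.append(layer.opacities[keep])
        if not means:  # no active layers
            return np.empty((0, 3)), np.empty(0)
        return np.concatenate(means), np.concatenate(opacities)

# Usage: hide the skin layer and clip the skull along the z = 0 plane.
model = LayeredModel([
    GaussianLayer("skin", np.random.randn(1000, 3), np.random.rand(1000), active=False),
    GaussianLayer("skull", np.random.randn(1000, 3), np.random.rand(1000)),
])
means, opac = model.visible_gaussians(clip_plane=(np.array([0.0, 0.0, 1.0]), 0.0))
```

Toggling `active` or passing a different clip plane per frame gives the render-time interactivity described above without retraining the static representation.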

{"title":"Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization.","authors":"Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth","doi":"10.1109/TVCG.2025.3549882","DOIUrl":"10.1109/TVCG.2025.3549882","url":null,"abstract":"<p><p>In medical image visualization, path tracing of volumetric medical data like computed tomography (CT) scans produces lifelike three-dimensional visualizations. Immersive virtual reality (VR) displays can further enhance the understanding of complex anatomies. Going beyond the diagnostic quality of traditional 2D slices, they enable interactive 3D evaluation of anatomies, supporting medical education and planning. Rendering high-quality visualizations in real-time, however, is computationally intensive and impractical for compute-constrained devices like mobile headsets. We propose a novel approach utilizing Gaussian Splatting (GS) to create an efficient but static intermediate representation of CT scans. We introduce a layered GS representation, incrementally including different anatomical structures while minimizing overlap and extending the GS training to remove inactive Gaussians. We further compress the created model with clustering across layers. Our approach achieves interactive frame rates while preserving anatomical structures, with quality adjustable to the target hardware. Compared to standard GS, our representation retains some of the explorative qualities initially enabled by immersive path tracing. Selective activation and clipping of layers are possible at rendering time, adding a degree of interactivity to otherwise static GS models. This could enable scenarios where high computational demands would otherwise prohibit using path-traced medical volumes.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549557
Sipeng Yang, Junhao Zhuge, Jiayu Ji, Qingchuan Zhu, Xiaogang Jin

Achieving immersive virtual reality (VR) experiences typically requires extensive computational resources to ensure high-definition visuals, high frame rates, and low latency in stereoscopic rendering. This challenge is particularly pronounced for lower-tier and standalone VR devices with limited processing power. To accelerate rendering, existing supersampling and image reprojection techniques have shown significant potential, yet to date, no previous work has explored their combination to minimize stereo rendering overhead. In this paper, we introduce a lightweight supersampling framework that integrates image reprojection with spatio-temporal supersampling to accelerate stereo rendering. Our approach effectively leverages the temporal and spatial redundancies inherent in stereo videos, enabling rapid image generation for unshaded viewpoints and providing resolution-enhanced and anti-aliased images for binocular viewpoints. We first blend a rendered low-resolution (LR) frame with accumulated temporal samples to construct a high-resolution (HR) frame. This HR frame is then reprojected to the other viewpoint to directly synthesize a new image. To address disocclusions in reprojected images, we fill them using accumulated history data and low-pass filtering, ensuring high-quality results with minimal delay. Extensive evaluations on both PC and standalone devices confirm that our framework requires only a short runtime to generate high-fidelity images, making it an effective solution for stereo rendering across various VR platforms.
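As a schematic illustration of the pipeline just described (temporal accumulation into an HR frame, reprojection to the second eye, history-based hole filling), here is a numpy sketch under simplifying assumptions: purely horizontal disparity, nearest-neighbor upsampling, and a fixed blend weight. It is not the authors' renderer.

```python
import numpy as np

def supersample_and_reproject(lr_frame, history, disparity, alpha=0.1):
    """Schematic stereo-acceleration step (illustrative assumptions only).

    lr_frame:  (H/2, W/2) newly rendered low-resolution image (one channel)
    history:   (H, W) accumulated high-resolution result from prior frames
    disparity: (H, W) per-pixel horizontal offset to the other eye, in pixels
    """
    # 1) Upsample the LR render and blend it with temporal history -> HR frame.
    up = np.kron(lr_frame, np.ones((2, 2)))        # nearest-neighbor upsample
    hr = alpha * up + (1 - alpha) * history

    # 2) Reproject the HR frame to the other viewpoint: each destination
    #    pixel gathers from x - disparity in the source view.
    H, W = hr.shape
    xs = np.arange(W)[None, :] - np.round(disparity).astype(int)
    valid = (xs >= 0) & (xs < W)
    rows = np.arange(H)[:, None].repeat(W, axis=1)
    other = np.zeros_like(hr)
    other[valid] = hr[rows[valid], xs[valid]]

    # 3) Fill disocclusions (pixels with no valid source) from low-pass-
    #    filtered history, standing in for the history-based filling above.
    blur = (history + np.roll(history, 1, axis=1) + np.roll(history, -1, axis=1)) / 3
    other[~valid] = blur[~valid]
    return hr, other

# Usage with synthetic data: 64x64 target, constant 4-pixel disparity.
lr = np.random.rand(32, 32)
hist = np.random.rand(64, 64)
disp = np.full((64, 64), 4.0)
hr, other_eye = supersample_and_reproject(lr, hist, disp)
```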

{"title":"Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling.","authors":"Sipeng Yang, Junhao Zhuge, Jiayu Ji, Qingchuan Zhu, Xiaogang JinZ","doi":"10.1109/TVCG.2025.3549557","DOIUrl":"10.1109/TVCG.2025.3549557","url":null,"abstract":"<p><p>Achieving immersive virtual reality (VR) experiences typically requires extensive computational resources to ensure highdefinition visuals, high frame rates, and low latency in stereoscopic rendering. This challenge is particularly pronounced for lower-tier and standalone VR devices with limited processing power. To accelerate rendering, existing supersampling and image reprojection techniques have shown significant potential, yet to date, no previous work has explored their combination to minimize stereo rendering overhead. In this paper, we introduce a lightweight supersampling framework that integrates image projection with spatio-temporal supersampling to accelerate stereo rendering. Our approach effectively leverages the temporal and spatial redundancies inherent in stereo videos, enabling rapid image generation for unshaded viewpoints and providing resolution-enhanced and anti-aliased images for binocular viewpoints. We first blend a rendered low-resolution (LR) frame with accumulated temporal samples to construct an high-resolution (HR) frame. This HR frame is then reprojected to the other viewpoint to directly synthesize a new image. To address disocclusions in reprojected images, we utilize accumulated history data and low-pass filtering for filling, ensuring high-quality results with minimal delay. Extensive evaluations on both the PC and the standalone device confirm that our framework requires short runtime to generate high-fidelity images, making it an effective solution for stereo rendering across various VR platforms.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ResponsiveView: Enhancing 3D Artifact Viewing Experience in VR Museums.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549872
Xueqi Wang, Yue Li, Boge Ling, Han-Mei Chen, Hai-Ning Liang

The viewing experience of 3D artifacts in Virtual Reality (VR) museums is constrained and affected by various factors, such as pedestal height, viewing distance, and object scale. User experiences regarding these factors can vary subjectively, making it difficult to identify a universal optimal solution. In this paper, we collect empirical data on user-determined parameters for the optimal viewing experience in VR museums. By modeling users' viewing behaviors in VR museums, we derive predictive functions that configure the pedestal height, calculate the optimal viewing distance, and adjust the handheld scale for an optimal viewing experience. This led to our novel 3D responsive design, ResponsiveView. Similar to responsive web design, which automatically adjusts for different screen sizes, ResponsiveView automatically adjusts the parameters in the VR environment to facilitate users' viewing experience. The design has been validated with two popular input methods available on current commercial VR devices, controller-based interaction and hand tracking, demonstrating an enhanced viewing experience in VR museums.
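The abstract reports predictive functions for pedestal height, viewing distance, and handheld scale without stating their form here. The sketch below only shows how such functions could plug into a scene update; every function body uses purely hypothetical placeholder coefficients where the paper's empirically fitted models would go.

```python
# Hypothetical adapter around ResponsiveView-style predictive functions.
# All coefficients below are placeholders for illustration only; the actual
# functions are fitted to the authors' empirical data.

def pedestal_height(eye_height_m: float) -> float:
    return 0.6 * eye_height_m                 # placeholder coefficient

def viewing_distance(artifact_diameter_m: float) -> float:
    return 1.5 * artifact_diameter_m + 0.4    # placeholder linear fit

def handheld_scale(artifact_diameter_m: float) -> float:
    target_span_m = 0.25                      # assumed comfortable handheld span
    return target_span_m / artifact_diameter_m

def configure_view(eye_height_m: float, artifact_diameter_m: float) -> dict:
    """Responsive configuration of one exhibit for the current user."""
    return {
        "pedestal_height": pedestal_height(eye_height_m),
        "viewing_distance": viewing_distance(artifact_diameter_m),
        "handheld_scale": handheld_scale(artifact_diameter_m),
    }

print(configure_view(eye_height_m=1.65, artifact_diameter_m=0.3))
```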

{"title":"ResponsiveView: Enhancing 3D Artifact Viewing Experience in VR Museums.","authors":"Xueqi Wang, Yue Li, Boge Ling, Han-Mei Chen, Hai-Ning Liang","doi":"10.1109/TVCG.2025.3549872","DOIUrl":"10.1109/TVCG.2025.3549872","url":null,"abstract":"<p><p>The viewing experience of 3D artifacts in Virtual Reality (VR) museums is constrained and affected by various factors, such as pedestal height, viewing distance, and object scale. User experiences regarding these factors can vary subjectively, making it difficult to identify a universal optimal solution. In this paper, we collect empirical data on user-determined parameters for the optimal viewing experience in VR museums. By modeling users' viewing behaviors in VR museums, we derive predictive functions that configure the pedestal height, calculate the optimal viewing distance, and adjust the appropriate handheld scale for the optimal viewing experience. This led to our novel 3D responsive design, ResponsiveView. Similar to the responsive web design that automatically adjusts for different screen sizes, ResponsiveView automatically adjusts the parameters in the VR environment to facilitate users' viewing experience. The design has been validated with two popular inputs available in current commercial VR devices: controller-based interactions and hand tracking, demonstrating enhanced viewing experience in VR museums.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549843
Nicalia Thompson, Xueni Pan, Maria Herrojo Ruiz

Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating, associated with changes in cognitive, emotional, behavioral, and physiological responses to performance situations. Efforts have been made to create simulated performance environments in conservatoires and Virtual Reality (VR) to assess their effectiveness in managing MPA. Despite these advances, results have been mixed, underscoring the need for controlled experimental designs and joint analyses of performance, physiology, and subjective ratings in these settings. Furthermore, the broader application of simulated performance environments for at-home use and laboratory studies on MPA remains limited. We designed VR scenarios to induce MPA in pianists and embedded them within a controlled within-subject experimental design to systematically assess their effects on performance, physiology, and anxiety ratings. Twenty pianists completed a performance task under two conditions: a public 'Audition' and a private 'Studio' rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performances in the real world. We measured subjective anxiety, performance (MIDI data), and heart rate variability (HRV). Compared to the Studio condition, pianists in the Audition condition reported higher somatic anxiety ratings and demonstrated an increase in performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity. No concurrent changes in HRV were observed. These results validate the potential of VR to induce MPA, enhancing pitch accuracy and invigorating tempo and dynamics. We discuss the strengths and limitations of this approach to develop VR-based interventions to mitigate the debilitating effects of MPA.
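For the heart rate variability measure mentioned above, a common time-domain statistic is RMSSD over successive inter-beat (RR) intervals. The sketch below computes it as a generic example; the abstract does not specify the study's exact HRV metric.

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive RR-interval differences (ms).
    A standard time-domain HRV statistic, shown as a generic example
    rather than necessarily the exact metric used in the study."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Usage: inter-beat intervals around 800 ms (75 bpm) with mild variability.
rr = np.array([812.0, 798.0, 805.0, 790.0, 820.0, 801.0])
print(rmssd(rr))  # roughly 19 ms for this sample
```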

{"title":"Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists.","authors":"Nicalia ThompSon, Xueni Pan, Maria Herrojo Ruiz","doi":"10.1109/TVCG.2025.3549843","DOIUrl":"10.1109/TVCG.2025.3549843","url":null,"abstract":"<p><p>Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating, associated with changes in cognitive, emotional, behavioral, and physiological responses to performance situations. Efforts have been made to create simulated performance environments in conservatoires and Virtual Reality (VR) to assess their effectiveness in managing MPA. Despite these advances, results have been mixed, underscoring the need for controlled experimental designs and joint analyses of performance, physiology, and subjective ratings in these settings. Furthermore, the broader application of simulated performance environments for at-home use and laboratory studies on MPA remains limited. We designed VR scenarios to induce MPA in pianists and embedded them within a controlled within-subject experimental design to systematically assess their effects on performance, physiology, and anxiety ratings. Twenty pianists completed a performance task under two conditions: a public 'Audition' and a private 'Studio' rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performances in the real world. We measured subjective anxiety, performance (MIDI data), and heart rate variability (HRV). Compared to the Studio condition, pianists in the Audition condition reported higher somatic anxiety ratings and demonstrated an increase in performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity. No concurrent changes in HRV were observed. These results validate the potential of VR to induce MPA, enhancing pitch accuracy and invigorating tempo and dynamics. We discuss the strengths and limitations of this approach to develop VR-based interventions to mitigate the debilitating effects of MPA.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Shiftly: A Novel Origami Shape-Shifting Haptic Device for Virtual Reality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549548
Tobias Batik, Hugo Brument, Khrystyna Vasylevska, Hannes Kaufmann

We present Shiftly, a novel shape-shifting haptic device that renders plausible haptic feedback when users touch virtual objects in Virtual Reality (VR). By changing its shape, it approximates different geometries of virtual objects to provide haptic feedback for the user's hand. The device employs only three actuators and three curved origamis that can be programmatically folded and unfolded to create a variety of touch surfaces ranging from flat to curved. In this paper, we present the design of Shiftly, including its kinematic model and its integration into VR setups for haptics. We also assessed Shiftly in two user studies. The first study evaluated how well Shiftly can approximate different shapes without visual representation. The second study investigated how realistic Shiftly's haptic feedback is when a user touches a rendered virtual object. The results showed that our device can provide realistic haptic feedback for flat surfaces, convex shapes of different curvatures, and edge-shaped geometries; it renders concave surfaces and objects with small details less realistically.

{"title":"Shiftly: A Novel Origami Shape-Shifting Haptic Device for Virtual Reality.","authors":"Tobias Batik, Hugo Brument, Khrystyna Vasylevska, Hannes Kaufmann","doi":"10.1109/TVCG.2025.3549548","DOIUrl":"10.1109/TVCG.2025.3549548","url":null,"abstract":"<p><p>We present a novel shape-shifting haptic device, Shiftly, which renders plausible haptic feedback when touching virtual objects in Virtual Reality (VR). By changing its shape, different geometries of virtual objects can be approximated to provide haptic feedback for the user's hand. The device employs only three actuators and three curved origamis that can be programmatically folded and unfolded to create a variety of touch surfaces ranging from flat to curved. In this paper, we present the design of Shiftly, including its kinematic model and integration into VR setups for haptics. We also assessed Shiftly using two user studies. The first study evaluated how well Shiftly can approximate different shapes without visual representation. The second study investigated the realism of the haptic feedback with Shiftly for a user when touching a rendered virtual object. The results showed that our device can provide realistic haptic feedback for flat surfaces, convex shapes of different curvatures, and edge-shaped geometries. Shiftly can less realistically render concave surfaces and objects with small details.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0