The Hidden Face of the Proteus Effect: Deindividuation, Embodiment and Identification
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549849
Anna Martin Coesel, Beatrice Biancardi, Mukesh Barange, Stephanie Buisine
The Proteus effect describes how users of virtual environments adjust their attitudes to match stereotypes associated with their avatar's appearance. While numerous studies have demonstrated this phenomenon's reliability, its underlying processes remain poorly understood. This work investigates deindividuation's hypothesized but unproven role within the Proteus effect. Deindividuated individuals tend to follow situational norms rather than personal ones. Therefore, together with high embodiment and identification processes, deindividuation may lead to a stronger Proteus effect. We present two experimental studies. First, we demonstrated the emergence of the Proteus effect in a real-world academic context: engineering students achieved better scores on a statistics task when embodying an Albert Einstein avatar than when embodying a control avatar. In the second study, we tested the role of deindividuation by manipulating participants' exposure to different identity cues during the task. While we could not find a significant effect of deindividuation on the participants' performance, our results highlight an unexpected pattern, with embodiment as a negative predictor and identification as a positive predictor of performance. These results open avenues for further research on the processes involved in the Proteus effect, particularly those focused on the relation between the avatar and the nature of the task to be performed. All supplemental materials are available at https://osf.io/au3wk/.
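As a hedged illustration only (not the authors' analysis script): the reported pattern, with embodiment as a negative and identification as a positive predictor of task score, corresponds to a multiple regression of roughly the following form. The file name and column names are assumptions.

```python
# Hypothetical sketch of the kind of regression behind the reported pattern;
# the CSV file and its column names (score, embodiment, identification) are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2_scores.csv")
model = smf.ols("score ~ embodiment + identification", data=df).fit()
print(model.summary())  # the sign of each coefficient indicates the predictor's direction
```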
{"title":"The Hidden Face of the Proteus Effect: Deindividuation, Embodiment and Identification.","authors":"Anna Martin Coesel, Beatrice Biancardi, Mukesh Barange, Stephanie Buisine","doi":"10.1109/TVCG.2025.3549849","DOIUrl":"10.1109/TVCG.2025.3549849","url":null,"abstract":"<p><p>The Proteus effect describes how users of virtual environments adjust their attitudes to match stereotypes associated with their avatar's appearance. While numerous studies have demonstrated this phenomenon's reliability, its underlying processes remain poorly understood. This work investigates deindividuation's hypothesized but unproven role within the Proteus effect. Deindividuated individuals tend to follow situational norms rather than personal ones. Therefore, together with high embodiment and identification processes, deindividuation may lead to a stronger Proteus effect. We present two experimental studies. First, we demonstrated the emergence of the Proteus effect in a real-world academic context: engineering students got better scores in a statistical task when embodying Albert Einstein's avatar compared to a control one. In the second study, we tested the role of deindividuation by manipulating participants' exposure to different identity cues during the task. While we could not find a significant effect of deindividuation on the participants' performance, our results highlight an unexpected pattern, with embodiment as a negative predictor and identification as a positive predictor of performance. These results open avenues for further research on the processes involved in the Proteus effect, particularly those focused on the relation between the avatar and the nature of the task to be performed. All supplemental materials are available at https://osf.io/au3wk/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SummonBrush: Enhancing Touch Interaction on Large XR User Interfaces by Augmenting Users' Hands with Virtual Brushes
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549553
Yang Tian, Zhao Su, Tianren Luo, Teng Han, Shengdong Zhao, Youpeng Zhang, Yixin Wang, BoYu Gao, Dangxiao Wang
Touch interaction is one of the fundamental interaction paradigms in XR, as users have become very familiar with touch interactions on physical touchscreens. However, users typically need to perform extensive arm movements when engaging with XR user interfaces that are much larger than mobile device touchscreens. We propose the SummonBrush technique to facilitate easy access to hidden windows while interacting with large XR user interfaces, requiring minimal arm movements. The SummonBrush technique adds a virtual brush to the index fingertip of a user's hand. Upon making contact with a virtual user interface, the brush bends and diverges, and ink starts to diffuse in it. The more the brush bends and diverges, the more the ink diffuses. The user can summon hidden windows or background applications in situ by first pressing the brush against the user interface until ink fully fills the brush and then performing swipe gestures. The user can also press the brush against the thumbnails of background applications in situ to quickly cycle through them. Ecological studies showed that SummonBrush significantly reduced arm movement time by 39% and 34% when summoning hidden windows and activating/closing background applications, respectively, leading to a significant decrease in reported physical demand.
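A minimal sketch of how the fill-ink-then-swipe trigger could be structured in application code; the class, rates, and thresholds are our assumptions, not the authors' implementation.

```python
# Hypothetical ink-fill state for a fingertip brush: pressing against a UI
# surface fills the ink; a swipe gesture is only accepted once the ink is full.
class BrushInkState:
    def __init__(self, fill_rate=1.5, drain_rate=0.8):
        self.level = 0.0            # 0.0 = empty, 1.0 = full
        self.fill_rate = fill_rate  # assumed fill speed while pressing (per second)
        self.drain_rate = drain_rate

    def update(self, pressing: bool, dt: float) -> None:
        delta = self.fill_rate * dt if pressing else -self.drain_rate * dt
        self.level = min(1.0, max(0.0, self.level + delta))

    def can_summon(self, swipe_detected: bool) -> bool:
        return swipe_detected and self.level >= 1.0

# Example: press for one second, then swipe
state = BrushInkState()
state.update(pressing=True, dt=1.0)
print(state.can_summon(swipe_detected=True))
```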
{"title":"SummonBrush: Enhancing Touch Interaction on Large XR User Interfaces by Augmenting Users' Hands with Virtual Brushes.","authors":"Yang Tian, Zhao Su, Tianren Luo, Teng Han, Shengdong Zhao, Youpeng Zhang, Yixin Wang, BoYu Gao, Dangxiao Wang","doi":"10.1109/TVCG.2025.3549553","DOIUrl":"10.1109/TVCG.2025.3549553","url":null,"abstract":"<p><p>Touch interaction is one of the fundamental interaction paradigms in XR, as users have become very familiar with touch interactions on physical touchscreens. However, users typically need to perform extensive arm movements for engaging with XR user interfaces much larger than mobile device touchscreens. We propose the SummonBrush technique to facilitate easy access to hidden windows while interacting with large XR user interfaces, requiring minimal arm movements. The SummonBrush technique adds a virtual brush to the index fingertip of a user's hand. Upon making contact with a virtual user interface, the brush bends and diverges and ink starts to diffuse in it. The more the brush bends and diverges, the more the ink diffuses. The user can summon hidden windows or background applications in situ, which is achieved by firstly pressing the brush against the user interface to make ink fully fill the brush and then perform swipe gestures. Also, the user can press the brush against the thumbtails of background applications in situ to quickly cycle them through. Ecological studies showed that SummonBrush significantly reduced the arm movement time by 39% and 34% in summoning hidden windows and activating/closing background applications, respectively, leading to a significant decrease in reported physical demand.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scaling Techniques for Exocentric Navigation Interfaces in Multiscale Virtual Environments
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549535
Jong-In Lee, Wolfgang Stuerzlinger
Navigating multiscale virtual environments necessitates an interaction method to travel across different levels of scale (LoS). Prior research has studied various techniques that enable users to seamlessly adjust their scale to navigate between different LoS based on specific user contexts. We introduce a scroll-based scale control method optimized for exocentric navigation, targeted at scenarios where speed and accuracy in continuous scaling are crucial. We pinpoint the challenges of scale control in settings with multiple LoS and evaluate how distinct designs of scaling techniques influence navigation performance and usability. Through a user study, we investigated two pivotal elements of a scaling technique: the input method and the scaling center. Our findings indicate that our scroll-based input method significantly reduces task completion time and error rate and enhances efficiency compared to the most frequently used bi-manual method. Moreover, we found that the choice of scaling center affects the ease of use of the scaling method, especially when paired with specific input methods.
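For illustration, scale control about a chosen scaling center is typically a similarity transform: the user's scale and position are adjusted so the center stays fixed while the level of scale changes. The exponential mapping from scroll input to scale factor below is an assumption about how a scroll-based control might be wired, not the paper's exact formulation.

```python
import numpy as np

def scroll_scale_step(user_pos, user_scale, scaling_center, scroll_delta, sensitivity=0.1):
    """One scroll step of exocentric scale control (illustrative only)."""
    factor = np.exp(sensitivity * scroll_delta)   # assumed exponential input mapping
    new_scale = user_scale * factor
    # Move the user relative to the scaling center so that point stays visually fixed.
    new_pos = np.asarray(scaling_center) + factor * (np.asarray(user_pos) - np.asarray(scaling_center))
    return new_pos, new_scale

# Example: two scroll ticks outward about a landmark at the origin
print(scroll_scale_step([2.0, 0.0, 1.0], 1.0, [0.0, 0.0, 0.0], scroll_delta=2.0))
```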
{"title":"Scaling Techniques for Exocentric Navigation Interfaces in Multiscale Virtual Environments.","authors":"Jong-In Lee, Wolfgang Stuerzlinger","doi":"10.1109/TVCG.2025.3549535","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549535","url":null,"abstract":"<p><p>Navigating multiscale virtual environments necessitates an interaction method to travel across different levels of scale (LoS). Prior research has studied various techniques that enable users to seamlessly adjust their scale to navigate between different LoS based on specific user contexts. We introduce a scroll-based scale control method optimized for exocentric navigation, targeted at scenarios where speed and accuracy in continuous scaling are crucial. We pinpoint the challenges of scale control in settings with multiple LoS and evaluate how distinct designs of scaling techniques influence navigation performance and usability. Through a user study, we investigated two pivotal elements of a scaling technique: the input method and the scaling center. Our findings indicate that our scroll-based input method significantly reduces task completion time and error rate and enhances efficiency compared to the most frequently used bi-manual method. Moreover, we found that the choice of scaling center affects the ease of use of the scaling method, especially when paired with specific input methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Signatures of Time Perception in Virtual Reality
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549570
Sahar Niknam, Saravanakumar Duraisamy, Jean Botev, Luis A Leiva
Achieving a high level of immersion and adaptation in virtual reality (VR) requires precise measurement and representation of user state. While extrinsic physical characteristics such as locomotion and pose can be accurately tracked in real-time, reliably capturing mental states is more challenging. Quantitative psychology makes it possible to consider more intrinsic features such as emotion, attention, or cognitive load. Time perception, in particular, is strongly tied to users' mental states, including stress, focus, and boredom. However, research on objectively measuring the pace at which we perceive the passage of time is scarce. In this work, we investigate the potential of electroencephalography (EEG) as an objective measure of time perception in VR, exploring neural correlates with oscillatory responses and time-frequency analysis. To this end, we implemented a variety of time perception modulators in VR, collected EEG recordings, and labeled them with overestimation, correct estimation, and underestimation time perception states. We found clear EEG spectral signatures for these three states that are persistent across individuals, modulators, and modulation durations. These signatures can be integrated and applied to monitor and actively influence time perception in VR, allowing the virtual environment to be purposefully adapted to the individual to further increase immersion and improve user experience. A free copy of this paper and all supplemental materials are available at https://vrarlab.uni.lu/pub/brain-signatures.
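As a hedged sketch of the kind of spectral feature such signatures rest on (not the paper's pipeline): oscillatory band power can be estimated for an EEG channel with a Welch periodogram. The band limits, sampling rate, and parameters below are conventional choices, not values taken from the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg_channel, fs, band):
    """Average PSD of one EEG channel within a frequency band (e.g. alpha)."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 256                              # assumed sampling rate (Hz)
signal = np.random.randn(60 * fs)     # placeholder for a 60 s single-channel recording
for name, band in {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}.items():
    print(name, band_power(signal, fs, band))
```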
{"title":"Brain Signatures of Time Perception in Virtual Reality.","authors":"Sahar Niknam, Saravanakumar Duraisamy, Jean Botev, Luis A Leiva","doi":"10.1109/TVCG.2025.3549570","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549570","url":null,"abstract":"<p><p>Achieving a high level of immersion and adaptation in virtual reality (VR) requires precise measurement and representation of user state. While extrinsic physical characteristics such as locomotion and pose can be accurately tracked in real-time, reliably capturing mental states is more challenging. Quantitative psychology allows considering more intrinsic features like emotion, attention, or cognitive load. Time perception, in particular, is strongly tied to users' mental states, including stress, focus, and boredom. However, research on objectively measuring the pace at which we perceive the passage of time is scarce. In this work, we investigate the potential of electroencephalography (EEG) as an objective measure of time perception in VR, exploring neural correlates with oscillatory responses and time-frequency analysis. To this end, we implemented a variety of time perception modulators in VR, collected EEG recordings, and labeled them with overestimation, correct estimation, and underestimation time perception states. We found clear EEG spectral signatures for these three states, that are persistent across individuals, modulators, and modulation duration. These signatures can be integrated and applied to monitor and actively influence time perception in VR, allowing the virtual environment to be purposefully adapted to the individual to increase immersion further and improve user experience. A free copy of this paper and all supplemental materials are available at https://vrarlab.uni.lu/pub/brain-signatures.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549887
Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik
Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self-exposure to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expressions. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.
{"title":"Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness.","authors":"Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik","doi":"10.1109/TVCG.2025.3549887","DOIUrl":"10.1109/TVCG.2025.3549887","url":null,"abstract":"<p><p>Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self exposures to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expression. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549882
Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth
In medical image visualization, path tracing of volumetric medical data like computed tomography (CT) scans produces lifelike three-dimensional visualizations. Immersive virtual reality (VR) displays can further enhance the understanding of complex anatomies. Going beyond the diagnostic quality of traditional 2D slices, they enable interactive 3D evaluation of anatomies, supporting medical education and planning. Rendering high-quality visualizations in real-time, however, is computationally intensive and impractical for compute-constrained devices like mobile headsets. We propose a novel approach utilizing Gaussian Splatting (GS) to create an efficient but static intermediate representation of CT scans. We introduce a layered GS representation, incrementally including different anatomical structures while minimizing overlap and extending the GS training to remove inactive Gaussians. We further compress the created model with clustering across layers. Our approach achieves interactive frame rates while preserving anatomical structures, with quality adjustable to the target hardware. Compared to standard GS, our representation retains some of the explorative qualities initially enabled by immersive path tracing. Selective activation and clipping of layers are possible at rendering time, adding a degree of interactivity to otherwise static GS models. This could enable scenarios where high computational demands would otherwise prohibit using path-traced medical volumes.
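A minimal sketch, under our own assumptions, of what a layered splat container with per-layer activation and clipping at render time could look like; it is not the authors' data format or renderer.

```python
# Hypothetical layered container for Gaussian splats: each anatomical layer keeps
# its own splat parameters and can be toggled or clipped before rendering.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SplatLayer:
    name: str                 # e.g. "skin", "bone", "vessels" (illustrative labels)
    means: np.ndarray         # (N, 3) Gaussian centers
    opacities: np.ndarray     # (N,) per-splat opacity
    active: bool = True

@dataclass
class LayeredSplatModel:
    layers: list = field(default_factory=list)

    def visible_splats(self, clip_plane=None):
        """Gather splat centers of active layers, optionally clipped by a plane (n, d)."""
        selected = []
        for layer in self.layers:
            if not layer.active:
                continue
            means = layer.means
            if clip_plane is not None:
                n, d = clip_plane
                keep = means @ np.asarray(n) + d >= 0.0   # keep splats on one side of the plane
                means = means[keep]
            selected.append(means)
        return np.concatenate(selected) if selected else np.empty((0, 3))
```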
{"title":"Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization.","authors":"Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth","doi":"10.1109/TVCG.2025.3549882","DOIUrl":"10.1109/TVCG.2025.3549882","url":null,"abstract":"<p><p>In medical image visualization, path tracing of volumetric medical data like computed tomography (CT) scans produces lifelike three-dimensional visualizations. Immersive virtual reality (VR) displays can further enhance the understanding of complex anatomies. Going beyond the diagnostic quality of traditional 2D slices, they enable interactive 3D evaluation of anatomies, supporting medical education and planning. Rendering high-quality visualizations in real-time, however, is computationally intensive and impractical for compute-constrained devices like mobile headsets. We propose a novel approach utilizing Gaussian Splatting (GS) to create an efficient but static intermediate representation of CT scans. We introduce a layered GS representation, incrementally including different anatomical structures while minimizing overlap and extending the GS training to remove inactive Gaussians. We further compress the created model with clustering across layers. Our approach achieves interactive frame rates while preserving anatomical structures, with quality adjustable to the target hardware. Compared to standard GS, our representation retains some of the explorative qualities initially enabled by immersive path tracing. Selective activation and clipping of layers are possible at rendering time, adding a degree of interactivity to otherwise static GS models. This could enable scenarios where high computational demands would otherwise prohibit using path-traced medical volumes.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549557
Sipeng Yang, Junhao Zhuge, Jiayu Ji, Qingchuan Zhu, Xiaogang Jin
Achieving immersive virtual reality (VR) experiences typically requires extensive computational resources to ensure high-definition visuals, high frame rates, and low latency in stereoscopic rendering. This challenge is particularly pronounced for lower-tier and standalone VR devices with limited processing power. To accelerate rendering, existing supersampling and image reprojection techniques have shown significant potential, yet to date, no previous work has explored their combination to minimize stereo rendering overhead. In this paper, we introduce a lightweight supersampling framework that integrates image reprojection with spatio-temporal supersampling to accelerate stereo rendering. Our approach effectively leverages the temporal and spatial redundancies inherent in stereo videos, enabling rapid image generation for unshaded viewpoints and providing resolution-enhanced and anti-aliased images for binocular viewpoints. We first blend a rendered low-resolution (LR) frame with accumulated temporal samples to construct a high-resolution (HR) frame. This HR frame is then reprojected to the other viewpoint to directly synthesize a new image. To address disocclusions in reprojected images, we fill them using accumulated history data and low-pass filtering, ensuring high-quality results with minimal delay. Extensive evaluations on both a PC and a standalone device confirm that our framework generates high-fidelity images with a short runtime, making it an effective solution for stereo rendering across various VR platforms.
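A hedged, heavily simplified sketch of the two steps described above: exponentially blending an upsampled LR frame into an HR history buffer, then warping the result to the other eye via horizontal disparity. The blending weight, camera parameters, and nearest-neighbor upsampling are our assumptions, not the paper's implementation.

```python
import numpy as np

def temporal_accumulate(lr_frame, history_hr, alpha=0.1):
    """Blend the upsampled current LR frame into the accumulated HR history."""
    scale = history_hr.shape[0] // lr_frame.shape[0]
    upsampled = np.kron(lr_frame, np.ones((scale, scale)))   # nearest-neighbor upsample
    return alpha * upsampled + (1.0 - alpha) * history_hr

def reproject_to_other_eye(hr_frame, depth, baseline=0.063, focal=500.0):
    """Shift each pixel horizontally by its disparity (baseline * focal / depth)."""
    h, w = hr_frame.shape
    disparity = np.round(baseline * focal / depth).astype(int)   # disparity in pixels
    cols = np.clip(np.arange(w)[None, :] - disparity, 0, w - 1)  # simple gather, ignores occlusion order
    return np.take_along_axis(hr_frame, cols, axis=1)

# Toy example with single-channel images and constant depth
lr = np.random.rand(90, 160)
history = np.zeros((360, 640))
hr = temporal_accumulate(lr, history)
other_eye = reproject_to_other_eye(hr, depth=np.full_like(hr, 2.0))
```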
{"title":"Accelerating Stereo Rendering via Image Reprojection and Spatio-Temporal Supersampling.","authors":"Sipeng Yang, Junhao Zhuge, Jiayu Ji, Qingchuan Zhu, Xiaogang JinZ","doi":"10.1109/TVCG.2025.3549557","DOIUrl":"10.1109/TVCG.2025.3549557","url":null,"abstract":"<p><p>Achieving immersive virtual reality (VR) experiences typically requires extensive computational resources to ensure highdefinition visuals, high frame rates, and low latency in stereoscopic rendering. This challenge is particularly pronounced for lower-tier and standalone VR devices with limited processing power. To accelerate rendering, existing supersampling and image reprojection techniques have shown significant potential, yet to date, no previous work has explored their combination to minimize stereo rendering overhead. In this paper, we introduce a lightweight supersampling framework that integrates image projection with spatio-temporal supersampling to accelerate stereo rendering. Our approach effectively leverages the temporal and spatial redundancies inherent in stereo videos, enabling rapid image generation for unshaded viewpoints and providing resolution-enhanced and anti-aliased images for binocular viewpoints. We first blend a rendered low-resolution (LR) frame with accumulated temporal samples to construct an high-resolution (HR) frame. This HR frame is then reprojected to the other viewpoint to directly synthesize a new image. To address disocclusions in reprojected images, we utilize accumulated history data and low-pass filtering for filling, ensuring high-quality results with minimal delay. Extensive evaluations on both the PC and the standalone device confirm that our framework requires short runtime to generate high-fidelity images, making it an effective solution for stereo rendering across various VR platforms.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ResponsiveView: Enhancing 3D Artifact Viewing Experience in VR Museums
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549872
Xueqi Wang, Yue Li, Boge Ling, Han-Mei Chen, Hai-Ning Liang
The viewing experience of 3D artifacts in Virtual Reality (VR) museums is constrained and affected by various factors, such as pedestal height, viewing distance, and object scale. User experiences regarding these factors can vary subjectively, making it difficult to identify a universal optimal solution. In this paper, we collect empirical data on user-determined parameters for the optimal viewing experience in VR museums. By modeling users' viewing behaviors in VR museums, we derive predictive functions that configure the pedestal height, calculate the optimal viewing distance, and adjust the appropriate handheld scale for the optimal viewing experience. This led to our novel 3D responsive design, ResponsiveView. Similar to responsive web design, which automatically adjusts for different screen sizes, ResponsiveView automatically adjusts the parameters in the VR environment to facilitate users' viewing experience. The design has been validated with two popular inputs available in current commercial VR devices: controller-based interactions and hand tracking, demonstrating an enhanced viewing experience in VR museums.
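As an illustration only: a responsive configuration step of the kind described could take the artifact's size and the user's eye height and return pedestal height, viewing distance, and handheld scale. The functional forms and coefficients below are placeholders, not the paper's fitted predictive functions.

```python
# Hypothetical responsive configuration: maps artifact size and user eye height
# to display parameters. All coefficients are placeholders, not fitted values.
def responsive_view_params(object_size_m, eye_height_m,
                           k_dist=2.0, k_scale=0.25, pedestal_offset=0.45):
    viewing_distance = k_dist * object_size_m                                  # larger objects viewed from farther away
    pedestal_height = max(0.0, eye_height_m - pedestal_offset - object_size_m / 2)  # keep the object near eye level
    handheld_scale = min(1.0, k_scale / max(object_size_m, 1e-6))              # shrink large objects for handheld viewing
    return pedestal_height, viewing_distance, handheld_scale

print(responsive_view_params(object_size_m=0.4, eye_height_m=1.65))
```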
{"title":"ResponsiveView: Enhancing 3D Artifact Viewing Experience in VR Museums.","authors":"Xueqi Wang, Yue Li, Boge Ling, Han-Mei Chen, Hai-Ning Liang","doi":"10.1109/TVCG.2025.3549872","DOIUrl":"10.1109/TVCG.2025.3549872","url":null,"abstract":"<p><p>The viewing experience of 3D artifacts in Virtual Reality (VR) museums is constrained and affected by various factors, such as pedestal height, viewing distance, and object scale. User experiences regarding these factors can vary subjectively, making it difficult to identify a universal optimal solution. In this paper, we collect empirical data on user-determined parameters for the optimal viewing experience in VR museums. By modeling users' viewing behaviors in VR museums, we derive predictive functions that configure the pedestal height, calculate the optimal viewing distance, and adjust the appropriate handheld scale for the optimal viewing experience. This led to our novel 3D responsive design, ResponsiveView. Similar to the responsive web design that automatically adjusts for different screen sizes, ResponsiveView automatically adjusts the parameters in the VR environment to facilitate users' viewing experience. The design has been validated with two popular inputs available in current commercial VR devices: controller-based interactions and hand tracking, demonstrating enhanced viewing experience in VR museums.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549843
Nicalia Thompson, Xueni Pan, Maria Herrojo Ruiz
Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating, and it is associated with changes in cognitive, emotional, behavioral, and physiological responses to performance situations. Efforts have been made to create simulated performance environments in conservatoires and Virtual Reality (VR) to assess their effectiveness in managing MPA. Despite these advances, results have been mixed, underscoring the need for controlled experimental designs and joint analyses of performance, physiology, and subjective ratings in these settings. Furthermore, the broader application of simulated performance environments for at-home use and laboratory studies on MPA remains limited. We designed VR scenarios to induce MPA in pianists and embedded them within a controlled within-subject experimental design to systematically assess their effects on performance, physiology, and anxiety ratings. Twenty pianists completed a performance task under two conditions: a public 'Audition' and a private 'Studio' rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performances in the real world. We measured subjective anxiety, performance (MIDI data), and heart rate variability (HRV). Compared to the Studio condition, pianists in the Audition condition reported higher somatic anxiety ratings and demonstrated an increase in performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity. No concurrent changes in HRV were observed. These results validate the potential of VR to induce MPA, enhancing pitch accuracy and invigorating tempo and dynamics. We discuss the strengths and limitations of this approach for developing VR-based interventions to mitigate the debilitating effects of MPA.
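For context, heart rate variability in such studies is commonly summarized by time-domain indices such as RMSSD over the inter-beat (RR) intervals. The sketch below shows that standard computation; it is not the authors' analysis code, and the example intervals are made up.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (milliseconds)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Example with placeholder RR intervals (ms)
print(rmssd([812, 795, 830, 805, 790, 820]))
```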
{"title":"Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists.","authors":"Nicalia ThompSon, Xueni Pan, Maria Herrojo Ruiz","doi":"10.1109/TVCG.2025.3549843","DOIUrl":"10.1109/TVCG.2025.3549843","url":null,"abstract":"<p><p>Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating, associated with changes in cognitive, emotional, behavioral, and physiological responses to performance situations. Efforts have been made to create simulated performance environments in conservatoires and Virtual Reality (VR) to assess their effectiveness in managing MPA. Despite these advances, results have been mixed, underscoring the need for controlled experimental designs and joint analyses of performance, physiology, and subjective ratings in these settings. Furthermore, the broader application of simulated performance environments for at-home use and laboratory studies on MPA remains limited. We designed VR scenarios to induce MPA in pianists and embedded them within a controlled within-subject experimental design to systematically assess their effects on performance, physiology, and anxiety ratings. Twenty pianists completed a performance task under two conditions: a public 'Audition' and a private 'Studio' rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performances in the real world. We measured subjective anxiety, performance (MIDI data), and heart rate variability (HRV). Compared to the Studio condition, pianists in the Audition condition reported higher somatic anxiety ratings and demonstrated an increase in performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity. No concurrent changes in HRV were observed. These results validate the potential of VR to induce MPA, enhancing pitch accuracy and invigorating tempo and dynamics. We discuss the strengths and limitations of this approach to develop VR-based interventions to mitigate the debilitating effects of MPA.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shiftly: A Novel Origami Shape-Shifting Haptic Device for Virtual Reality
Pub Date: 2025-03-10 | DOI: 10.1109/TVCG.2025.3549548
Tobias Batik, Hugo Brument, Khrystyna Vasylevska, Hannes Kaufmann
We present a novel shape-shifting haptic device, Shiftly, which renders plausible haptic feedback when touching virtual objects in Virtual Reality (VR). By changing its shape, different geometries of virtual objects can be approximated to provide haptic feedback for the user's hand. The device employs only three actuators and three curved origamis that can be programmatically folded and unfolded to create a variety of touch surfaces ranging from flat to curved. In this paper, we present the design of Shiftly, including its kinematic model and integration into VR setups for haptics. We also assessed Shiftly using two user studies. The first study evaluated how well Shiftly can approximate different shapes without visual representation. The second study investigated the realism of the haptic feedback with Shiftly for a user when touching a rendered virtual object. The results showed that our device can provide realistic haptic feedback for flat surfaces, convex shapes of different curvatures, and edge-shaped geometries. Shiftly can less realistically render concave surfaces and objects with small details.
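A hedged sketch of one way the rendering side could pick which of the device's reachable surface shapes best approximates a touched virtual surface; the discrete shape set, curvature values, and matching rule are our assumptions, not the device's actual controller.

```python
# Hypothetical shape selection: choose the renderable surface (flat, convex of
# some curvature, or edge) closest to the local curvature of the virtual object
# under the user's hand. Curvature values are illustrative placeholders (1/m).
RENDERABLE_SHAPES = {
    "flat": 0.0,
    "convex_gentle": 5.0,
    "convex_tight": 20.0,
    "edge": 80.0,
}

def select_shape(target_curvature):
    return min(RENDERABLE_SHAPES, key=lambda s: abs(RENDERABLE_SHAPES[s] - target_curvature))

print(select_shape(12.0))  # -> "convex_gentle" (closest placeholder curvature to 12 1/m)
```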
{"title":"Shiftly: A Novel Origami Shape-Shifting Haptic Device for Virtual Reality.","authors":"Tobias Batik, Hugo Brument, Khrystyna Vasylevska, Hannes Kaufmann","doi":"10.1109/TVCG.2025.3549548","DOIUrl":"10.1109/TVCG.2025.3549548","url":null,"abstract":"<p><p>We present a novel shape-shifting haptic device, Shiftly, which renders plausible haptic feedback when touching virtual objects in Virtual Reality (VR). By changing its shape, different geometries of virtual objects can be approximated to provide haptic feedback for the user's hand. The device employs only three actuators and three curved origamis that can be programmatically folded and unfolded to create a variety of touch surfaces ranging from flat to curved. In this paper, we present the design of Shiftly, including its kinematic model and integration into VR setups for haptics. We also assessed Shiftly using two user studies. The first study evaluated how well Shiftly can approximate different shapes without visual representation. The second study investigated the realism of the haptic feedback with Shiftly for a user when touching a rendered virtual object. The results showed that our device can provide realistic haptic feedback for flat surfaces, convex shapes of different curvatures, and edge-shaped geometries. Shiftly can less realistically render concave surfaces and objects with small details.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}