From Novelty to Knowledge: A Longitudinal Investigation of the Novelty Effect on Learning Outcomes in Virtual Reality.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549897
Joomi Lee, Chen Chen, Aryabrata Basu
Virtual reality (VR) is increasingly recognized as a powerful educational platform, but the novelty effect, in which users experience heightened engagement during their initial interactions with a new technology, can interfere with learning outcomes. This study investigates how the novelty effect influences learning using a three-wave longitudinal design, tracking changes in information recall and exploratory behavior over three weeks. Our findings reveal that while initial novelty impedes learning, learners' ability to encode educational content improves as they become more familiar with the virtual environment. Additionally, sustained exploratory behavior positively impacts learning over time, reinforcing the importance of active engagement in VR-based education. This study enhances the understanding of VR's long-term educational impact and provides guidance for improving learning effectiveness in immersive learning environments.
HIPS - A Surgical Virtual Reality Training System for Total Hip Arthroplasty (THA) with Realistic Force Feedback.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549896
Mario Lorenz, Maximilian Kaluschke, Annegret Melzer, Nina Pillen, Magdalena Sanrow, Andrea Hoffmann, Dennis Schmidt, Andre Dettmann, Angelika C Bullinger, Jerome Perret, Gabriel Zachmann
Virtual reality training simulations for acquiring surgical skills are important for increasing patient safety and for saving valuable resources such as cadavers, supervision, and operating room time. However, because surgery is a craft, simulators must provide not only a high degree of visual realism but, above all, realistic haptic behavior. While such simulators exist for surgeries like laparoscopy or arthroscopy, other surgical fields, especially those where large forces must be exerted, such as total hip arthroplasty (THA; the implantation of a hip joint prosthesis), lack realistic VR training simulations. In this paper we present, for the first time, a VR training simulation with realistic haptic feedback for the five steps of THA, from femur head resection to stem implantation. To achieve this, we introduce a novel haptic hammering device, an upgraded version of Haption's Virtuose 6D haptic device, and novel algorithms for collision detection, haptic rendering, and material removal. In a study with 17 surgeons of diverse experience levels, we confirmed the realism, usefulness, and usability of our novel methods.
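The abstract names collision detection, haptic rendering, and material removal algorithms but does not describe them. Purely as an illustrative sketch of how force feedback is commonly computed in such simulators, the snippet below shows a generic penalty-based contact model; it is not the HIPS algorithm, and all function names and constants are assumptions.

```python
import numpy as np

def penalty_force(tool_tip, surface_point, surface_normal,
                  stiffness=800.0, damping=2.0, tool_velocity=None):
    """Generic penalty-based haptic force sketch (illustrative, not HIPS's method).

    tool_tip, surface_point, surface_normal: 3D numpy arrays in meters.
    stiffness [N/m] and damping [N*s/m] are illustrative values; real simulators
    tune them per material and per haptic update rate (typically ~1 kHz).
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    # Penetration depth along the surface normal (positive when the tool is inside the bone).
    depth = np.dot(surface_point - tool_tip, n)
    if depth <= 0.0:
        return np.zeros(3)  # no contact, no force
    force = stiffness * depth * n  # spring term pushes the tool out of the surface
    if tool_velocity is not None:
        force -= damping * np.dot(tool_velocity, n) * n  # damp motion along the normal
    return force
```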
ArmVR: Innovative Design Combining Virtual Reality Technology and Mechanical Equipment in Stroke Rehabilitation Therapy.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549561
Jing Qu, Lingguo Bu, Zhongxin Chen, Yalu Jin, Lei Zhao, Shantong Zhu, Fenghe Guo
The rising incidence of stroke has created a significant global public health challenge. The immersive qualities of virtual reality (VR) technology, along with its distinct advantages, make it a promising tool for stroke rehabilitation. To address this challenge, developing VR-based upper limb rehabilitation systems has become a critical research focus. This study developed and evaluated an innovative ArmVR system that combines VR technology with rehabilitation hardware to improve recovery outcomes for stroke patients. Through comprehensive assessments, including neurofeedback, pressure feedback, and subjective feedback, the results suggest that VR technology has the potential to positively support the recovery of cognitive and motor functions. Different VR environments affect rehabilitation outcomes: forest scenarios aid emotional relaxation, while city scenarios better activate motor centers in stroke patients. The study also identified variations in responses among different user groups. Normal users showed significant changes in cognitive function, whereas stroke patients primarily experienced motor function recovery. These findings suggest that VR-integrated rehabilitation systems possess great potential, and personalized design can further enhance recovery outcomes, meet diverse patient needs, and ultimately improve quality of life.
Unified Approach to Mesh Saliency: Evaluating Textured and Non-Textured Meshes Through VR and Multifunctional Prediction.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549550
Kaiwei Zhang, Dandan Zhu, Xiongkuo Min, Guangtao Zhai
Mesh saliency aims to endow artificial intelligence with the adaptability to highlight the regions of a 3D mesh that naturally attract visual attention. Existing advances primarily emphasize the crucial role of geometric shape in determining mesh saliency, but it remains challenging to flexibly capture the distinct visual appeal produced by the realism of complex texture patterns. To investigate the interaction between geometric shape and texture features in visual perception, we establish a comprehensive mesh saliency dataset that captures saliency distributions for identical 3D models under both non-textured and textured conditions. Additionally, we propose a unified saliency prediction model applicable to various mesh types, providing valuable insights for both detailed modeling and realistic rendering applications. This model effectively analyzes the geometric structure of the mesh while seamlessly incorporating texture features into the topological framework, ensuring coherence throughout appearance-enhanced modeling. Through extensive theoretical and empirical validation, our approach not only enhances performance across different mesh types but also demonstrates the model's scalability and generalizability, particularly through cross-validation of various visual features.
Hit Around: Substitutional Moving Robot for Immersive and Exertion Interaction with Encountered-Type Haptic.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549556
Yu-Hsiang Weng, Ping-Hsuan Han, Kuan-Ning Chang, Chi-Yu Lin, Chia-Hui Lin, Ho Yin Ng, Chien-Hsing Chou, Wen-Hsin Chiu
Previous work has shown the potential of immersive technologies to make physical activities a more engaging experience. With encountered-type haptic feedback, users can perceive more realistic sensations during exertion interaction in substitutional reality. Although substitutional reality has used physical environments, props, and devices to provide encountered-type haptic feedback, these cannot withstand the strong forces humans exert and cannot provide feedback while users move around, as in combat sports. In this work, we present Hit Around, a substitutional moving robot for immersive exertion interaction that lets users move around, punch a virtual opponent, and perceive encountered-type haptic feedback anywhere. We gathered insights into immersive exertion interaction from three exhibitions with iterative prototypes, then designed and implemented the hardware system and application. To assess mobility and weight-loading capability, we conducted two technical evaluations and a laboratory experiment to validate feasibility. Finally, a field deployment study explored the limitations and challenges of developing immersive exertion interaction with encountered-type haptics.
Explainable XR: Understanding User Behaviors of XR Environments Using LLM-assisted Analytics Framework.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549537
Yoonsang Kim, Zainab Aamir, Mithilesh Singh, Saeed Boorboor, Klaus Mueller, Arie E Kaufman
We present Explainable XR, an end-to-end framework for analyzing user behavior in diverse eXtended Reality (XR) environments by leveraging Large Language Models (LLMs) for data interpretation assistance. Existing XR user analytics frameworks face challenges in handling transitions across virtualities (AR, VR, and MR), multi-user collaborative application scenarios, and the complexity of multimodal data. Explainable XR addresses these challenges by providing a virtuality-agnostic solution for the collection, analysis, and visualization of immersive sessions. We propose three main components in our framework: (1) a novel user data recording schema, called User Action Descriptor (UAD), that captures users' multimodal actions along with their intents and contexts; (2) a platform-agnostic XR session recorder; and (3) a visual analytics interface that offers LLM-assisted insights tailored to the analysts' perspectives, facilitating the exploration and analysis of the recorded XR session data. We demonstrate the versatility of Explainable XR through five use-case scenarios, in both individual and collaborative XR applications across virtualities. Our technical evaluation and user studies show that Explainable XR provides a highly usable analytics solution for understanding user actions and delivering multifaceted, actionable insights into user behaviors in immersive environments.
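The UAD schema itself is defined in the paper; the record below is only a hypothetical sketch of what such a per-action entry could contain, based solely on the abstract's description (multimodal action, intent, context, virtuality, multi-user support). All field names are assumptions, not the published schema.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import json, time

@dataclass
class UserActionRecord:
    """Hypothetical per-action record inspired by the UAD description in the abstract."""
    timestamp: float                      # seconds since session start
    user_id: str                          # supports multi-user collaborative sessions
    virtuality: str                       # "AR", "VR", or "MR" at the time of the action
    action: str                           # e.g., "grab", "teleport", "gaze", "utterance"
    target: Optional[str] = None          # object or UI element acted upon, if any
    intent: Optional[str] = None          # inferred purpose of the action
    context: dict = field(default_factory=dict)     # scene, task phase, nearby collaborators
    modalities: dict = field(default_factory=dict)  # raw multimodal payloads (gaze ray, hand pose, audio ref)

# Example usage: serialize one record, e.g., for an LLM-assisted analytics pass.
record = UserActionRecord(
    timestamp=time.time(), user_id="user-01", virtuality="MR",
    action="grab", target="valve_handle",
    intent="start maintenance subtask",
    context={"scene": "factory_floor", "collaborators": ["user-02"]},
    modalities={"hand_pose": [0.12, 0.98, 0.45], "gaze_target": "valve_handle"},
)
print(json.dumps(asdict(record), indent=2))
```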
Evaluating 3D Visual Comparison Techniques for Change Detection in Virtual Reality.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549578
Changrui Zhu, Ernst Kruijff, Vijay M Pawar, Simon Julier
Change detection (CD) is critical in everyday tasks. While current algorithmic approaches to CD are improving, they remain imprecise and often require human intervention. Cognitive science research focuses on understanding CD mechanisms, especially through change blindness studies. However, these do not address the primary requirement of real-life CD: detecting changes as effectively as possible. This requirement is directly relevant to the field of visual comparison, which studies visualisation techniques for comparing data and identifying differences or changes effectively. Recent studies have used Virtual Reality (VR) to improve visual comparison by providing an immersive platform where users can interact with 3D data at real-life scale, enhancing spatial reasoning. We believe VR could also improve CD performance accordingly. In particular, VR offers stereoscopic depth perception beyond traditional displays, potentially enhancing the detection of spatial change. In this paper, we develop and analyse three 3D visual comparison techniques for CD in VR: Sliding Window, 3D Slider, and Switch Back. These techniques are evaluated in synthetic but realistic environments and under frequently occurring Perceptual Challenges, including different Changed Object Size, Lighting Variation, and Scene Drift conditions. Experimental results reveal significant differences between the techniques in detection time measures and subjective user experience.
Multimodal Turn in Place: A Comparative Analysis of Visual and Auditory Reset UIs in Redirected Walking.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549852
Ho Jung Lee, Hyunjeong Kim, In-Kwon Lee
Resetting in redirected walking (RDW) allows users to maintain a continuous, collision-free walking experience in virtual reality (VR), even in a limited physical space. Since frequent resets reduce the user's sense of immersion, extensive research has been conducted on resetters that provide optimal reset directions. Various visual reset user interfaces (UIs) have been proposed to help users turn toward the correct reset direction prescribed by such improved resetters, but their effectiveness has not been sufficiently verified. In addition, expert interviews conducted to identify problems in the current reset process revealed that users sometimes fail to notice the visual reset UI in time. Therefore, we propose Gauge, a novel visual reset UI that is expected to provide users with an effective and high-quality experience. In Study 1, we demonstrate the effectiveness of the Gauge UI by comparing it to existing UIs (Direction, End Point, and Arrow Alignment). Because users of various locomotion techniques, including RDW, inevitably need to perform resets, we also propose a novel paradigm in this work: a combined multimodal reset interface.
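The abstract does not describe how the Gauge UI is driven. As a purely illustrative sketch (not the authors' design), one plausible way to fill a gauge during a turn-in-place reset is to map the remaining angle between the user's current heading and the prescribed reset direction to a 0-1 fill value; all names and the normalization scheme below are assumptions.

```python
def gauge_fill(current_heading_deg, reset_heading_deg, start_offset_deg):
    """Illustrative gauge fill for a turn-in-place reset (hypothetical, not the paper's UI).

    current_heading_deg: user's current yaw.
    reset_heading_deg:   yaw the resetter wants the user to face.
    start_offset_deg:    angular offset when the reset began (used to normalize progress).
    Returns a value in [0, 1]; 1 means the user is facing the reset direction.
    """
    def shortest_angle(a, b):
        # signed difference wrapped to (-180, 180]
        return (b - a + 180.0) % 360.0 - 180.0

    remaining = abs(shortest_angle(current_heading_deg, reset_heading_deg))
    if start_offset_deg <= 0.0:
        return 1.0
    return max(0.0, min(1.0, 1.0 - remaining / start_offset_deg))

# Example: the reset began with the target 120 degrees away; the user has turned 90 of them.
print(gauge_fill(current_heading_deg=30.0, reset_heading_deg=60.0, start_offset_deg=120.0))  # 0.75
```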
Peripheral Teleportation: A Rest Frame Design to Mitigate Cybersickness During Virtual Locomotion.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549568
Tongyu Nie, Courtney Hutton Pospick, Ville Cantory, Danhua Zhang, Jasmine Joyce DeGuzman, Victoria Interrante, Isayas Berhe Adhanom, Evan Suma Rosenberg
Mitigating cybersickness can improve the usability of virtual reality (VR) and increase its adoption. The most widely used technique, dynamic field-of-view (FOV) restriction, mitigates cybersickness by blacking out the peripheral region of the user's FOV. However, this approach reduces the visibility of the virtual environment. We propose peripheral teleportation, a novel technique that creates a rest frame (RF) in the user's peripheral vision using content rendered from the current virtual environment. Specifically, the peripheral region is rendered by a pair of RF cameras whose transforms are updated by the user's physical motion. We apply alternating teleportations during translations, or snap turns during rotations, to the RF cameras to keep them close to the current viewpoint transformation. Consequently, the optical flow generated by RF cameras matches the user's physical motion, creating a stable peripheral view. In a between-subjects study (N=90), we compared peripheral teleportation with a traditional black FOV restrictor and an unrestricted control condition. The results showed that peripheral teleportation significantly reduced discomfort and enabled participants to stay immersed in the virtual environment for a longer duration of time. Overall, these findings suggest that peripheral teleportation is a promising technique that VR practitioners may consider adding to their cybersickness mitigation toolset.
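As the abstract describes, the peripheral region is rendered from rest-frame (RF) cameras that are kept near the current viewpoint by discrete jumps only (teleportations for translation, snap turns for rotation), so the peripheral optical flow matches the user's physical motion. The sketch below illustrates that update rule under assumed threshold values; it is one reading of the abstract, not the authors' released code.

```python
import math

# Illustrative thresholds (assumptions, not values from the paper).
TELEPORT_DISTANCE = 0.75   # meters of drift before the RF camera teleports to the viewpoint
SNAP_TURN_ANGLE   = 45.0   # degrees of yaw drift before the RF camera snap-turns

def update_rf_camera(rf_pos, rf_yaw, view_pos, view_yaw):
    """Keep the rest-frame camera near the current viewpoint using discrete jumps only.

    Between jumps the RF camera is driven purely by tracked physical motion, so the
    peripheral image it produces has optical flow consistent with that motion; the
    jumps merely re-center it after virtual locomotion accumulates drift.
    """
    dx, dy = view_pos[0] - rf_pos[0], view_pos[1] - rf_pos[1]
    if math.hypot(dx, dy) > TELEPORT_DISTANCE:
        rf_pos = tuple(view_pos)                              # teleport during translation
    yaw_diff = (view_yaw - rf_yaw + 180.0) % 360.0 - 180.0
    if abs(yaw_diff) > SNAP_TURN_ANGLE:
        rf_yaw += math.copysign(SNAP_TURN_ANGLE, yaw_diff)    # snap turn during rotation
    return rf_pos, rf_yaw

# Example: the viewpoint has drifted 1 m and 50 degrees since the last jump.
print(update_rf_camera(rf_pos=(0.0, 0.0), rf_yaw=0.0, view_pos=(1.0, 0.0), view_yaw=50.0))
```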
Simulating Social Pressure: Evaluating Risk Behaviors in Construction Using Augmented Virtuality.
Pub Date : 2025-03-10  DOI: 10.1109/TVCG.2025.3549877
Shiva Pooladvand, Sogand Hasanzadeh, George Takahashi, Kenneth Jongwon Park, Jacob Marroquin
Drawing on social influence and behavioral intention theories, coworkers' risk-taking serves as an "extra motive" (an exogenous factor) for risk-taking behaviors among workers in the workplace. Social influence theories have shown that social factors, such as social pressure and coworker risk-taking, may predict risk-taking behaviors and significantly affect decision-making. While immersive technologies have been widely used to create close-to-real simulations for construction safety studies, there is a paucity of research considering the impact of social presence when evaluating workers' risk decision-making within immersive environments. To bridge this gap, this study developed a state-of-the-art Augmented Virtuality (AV) environment to investigate roofers' risk-taking behaviors when exposed to social stressors (working alongside a safe or unsafe peer). In this augmented virtuality environment, a virtual peer exhibiting safe or unsafe behaviors was simulated to impose peer pressure and increase participants' sense of social presence. Participants were asked to install asphalt shingles on a physical section of a roof (passive haptics) while the rest of the environment was projected virtually. During shingle installation, participants' cognitive and behavioral responses were captured using psychophysiological wearable technologies and self-report measures. The results demonstrated that the developed AV model could successfully enhance participants' sense of presence and social presence while serving as an appropriate platform for assessing individuals' decision-making orientations and behavioral changes in the presence of social stressors. These findings show the value of immersive technologies for examining individuals' naturalistic responses without exposing them to actual risks.