
Latest articles in IEEE Transactions on Visualization and Computer Graphics

From Novelty to Knowledge: A Longitudinal Investigation of the Novelty Effect on Learning Outcomes in Virtual Reality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549897
Joomi Lee, Chen Chen, Aryabrata Basu

Virtual reality (VR) is increasingly recognized as a powerful educational platform, but the novelty effect, where users experience heightened engagement during initial interactions with new technology, can interfere with learning outcomes. This study investigates how the novelty effect influences learning using a three-wave longitudinal design, tracking changes in information recall and exploratory behavior over three weeks. Our findings reveal that while initial novelty impedes learning, learners' ability to encode educational content improves as they become more familiar with the virtual environment. Additionally, sustained exploratory behavior positively impacts learning over time, reinforcing the importance of active engagement in VR-based education. This study enhances the understanding of VR's long-term educational impact and provides guidance for improving learning effectiveness in immersive learning environments.

Citations: 0
HIPS - A Surgical Virtual Reality Training System for Total Hip Arthroplasty (THA) with Realistic Force Feedback.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549896
Mario Lorenz, Maximilian Kaluschke, Annegret Melzer, Nina Pillen, Magdalena Sanrow, Andrea Hoffmann, Dennis Schmidt, Andre Dettmann, Angelika C Bullinger, Jerome Perret, Gabriel Zachmann

Virtual reality training simulations for acquiring surgical skills are important for increasing patient safety and saving valuable resources, e.g., cadavers, supervision, and operating room time. However, as surgery is a craft, simulators must provide not only a high degree of visual realism but, in particular, realistic haptic behavior. While such simulators exist for surgeries like laparoscopy or arthroscopy, other surgical fields, especially those where large forces must be exerted, like total hip arthroplasty (THA; implantation of a hip joint prosthesis), lack realistic VR training simulations. In this paper, we present for the first time a novel VR training simulation for the five steps of THA (from femoral head resection to stem implantation) with realistic haptic feedback. To achieve this, we introduce a novel haptic hammering device, an upgraded version of Haption's Virtuose 6D haptic device, and novel algorithms for collision detection, haptic rendering, and material removal. In a study with 17 surgeons of diverse experience levels, we confirmed the realism, usefulness, and usability of our novel methods.

Citations: 0
ArmVR: Innovative Design Combining Virtual Reality Technology and Mechanical Equipment in Stroke Rehabilitation Therapy.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549561
Jing Qu, Lingguo Bu, Zhongxin Chen, Yalu Jin, Lei Zhao, Shantong Zhu, Fenghe Guo

The rising incidence of stroke has created a significant global public health challenge. The immersive qualities of virtual reality (VR) technology, along with its distinct advantages, make it a promising tool for stroke rehabilitation. To address this challenge, developing VR-based upper limb rehabilitation systems has become a critical research focus. This study developed and evaluated an innovative ArmVR system that combines VR technology with rehabilitation hardware to improve recovery outcomes for stroke patients. Through comprehensive assessments, including neurofeedback, pressure feedback, and subjective feedback, the results suggest that VR technology has the potential to positively support the recovery of cognitive and motor functions. Different VR environments affect rehabilitation outcomes: forest scenarios aid emotional relaxation, while city scenarios better activate motor centers in stroke patients. The study also identified variations in responses among different user groups. Healthy users showed significant changes in cognitive function, whereas stroke patients primarily experienced motor function recovery. These findings suggest that VR-integrated rehabilitation systems possess great potential, and personalized design can further enhance recovery outcomes, meet diverse patient needs, and ultimately improve quality of life.

Citations: 0
Unified Approach to Mesh Saliency: Evaluating Textured and Non-Textured Meshes Through VR and Multifunctional Prediction.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549550
Kaiwei Zhang, Dandan Zhu, Xiongkuo Min, Guangtao Zhai

Mesh saliency aims to empower artificial intelligence with strong adaptability to highlight regions that naturally attract visual attention. Existing advances primarily emphasize the crucial role of geometric shapes in determining mesh saliency, but it remains challenging to flexibly sense the unique visual appeal brought by the realism of complex texture patterns. To investigate the interaction between geometric shapes and texture features in visual perception, we establish a comprehensive mesh saliency dataset, capturing saliency distributions for identical 3D models under both non-textured and textured conditions. Additionally, we propose a unified saliency prediction model applicable to various mesh types, providing valuable insights for both detailed modeling and realistic rendering applications. This model effectively analyzes the geometric structure of the mesh while seamlessly incorporating texture features into the topological framework, ensuring coherence throughout appearance-enhanced modeling. Through extensive theoretical and empirical validation, our approach not only enhances performance across different mesh types, but also demonstrates the model's scalability and generalizability, particularly through cross-validation of various visual features.
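The abstract does not name the evaluation metrics used for the saliency prediction model. Per-vertex saliency predictions are, however, commonly scored against ground-truth attention maps with a linear correlation coefficient (CC); the following sketch of that standard metric is purely illustrative and is not taken from the paper.

```python
import math

def pearson_cc(pred: list[float], gt: list[float]) -> float:
    """Linear correlation between predicted and ground-truth per-vertex saliency.

    CC is a widely used saliency-evaluation metric; whether the authors use it
    is not stated in the abstract, so treat this as an illustrative choice.
    """
    n = len(pred)
    mean_p, mean_g = sum(pred) / n, sum(gt) / n
    cov = sum((p - mean_p) * (g - mean_g) for p, g in zip(pred, gt))
    norm_p = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    norm_g = math.sqrt(sum((g - mean_g) ** 2 for g in gt))
    return cov / (norm_p * norm_g)

# Two maps related by a positive linear transform score 1.0.
score = pearson_cc([0.1, 0.4, 0.9], [0.2, 0.8, 1.8])
```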

Citations: 0
Hit Around: Substitutional Moving Robot for Immersive and Exertion Interaction with Encountered-Type Haptic.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549556
Yu-Hsiang Weng, Ping-Hsuan Han, Kuan-Ning Chang, Chi-Yu Lin, Chia-Hui Lin, Ho Yin Ng, Chien-Hsing Chou, Wen-Hsin Chiu

Previous works have shown the potential of immersive technologies to make physical activities a more engaging experience. With encountered-type haptic feedback, users can perceive a more realistic sensation during exertion interaction in substitutional reality. Although substitutional reality has utilized physical environments, props, and devices to provide encountered-type haptic feedback, these cannot withstand the strong forces humans exert and do not provide feedback while users move around, as in combat sports. In this work, we present Hit Around, a substitutional moving robot for immersive and exertion interaction, with which the user can move, punch a virtual opponent, and perceive encountered-type haptic feedback anywhere. We gathered insights into immersive exertion interaction from three exhibitions with iterative prototypes, then designed and implemented the hardware system and application. To assess the robot's mobility and weight-loading capability, we conducted two technical evaluations and a laboratory experiment to validate feasibility. Finally, a field deployment study explored the limitations and challenges of developing immersive exertion interaction with encountered-type haptics.

Citations: 0
Explainable XR: Understanding User Behaviors of XR Environments Using LLM-assisted Analytics Framework.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549537
Yoonsang Kim, Zainab Aamir, Mithilesh Singh, Saeed Boorboor, Klaus Mueller, Arie E Kaufman

We present Explainable XR, an end-to-end framework for analyzing user behavior in diverse eXtended Reality (XR) environments by leveraging Large Language Models (LLMs) for data interpretation assistance. Existing XR user analytics frameworks face challenges in handling transitions across virtualities (AR, VR, MR), multi-user collaborative application scenarios, and the complexity of multimodal data. Explainable XR addresses these challenges by providing a virtuality-agnostic solution for the collection, analysis, and visualization of immersive sessions. We propose three main components in our framework: (1) a novel user data recording schema, called User Action Descriptor (UAD), that can capture users' multimodal actions, along with their intents and contexts; (2) a platform-agnostic XR session recorder; and (3) a visual analytics interface that offers LLM-assisted insights tailored to the analysts' perspectives, facilitating the exploration and analysis of the recorded XR session data. We demonstrate the versatility of Explainable XR through five use-case scenarios, in both individual and collaborative XR applications across virtualities. Our technical evaluation and user studies show that Explainable XR provides a highly usable analytics solution for understanding user actions and delivering multifaceted, actionable insights into user behaviors in immersive environments.
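The abstract describes the UAD as a record of a user's multimodal actions together with their intents and contexts, but does not give its fields. A hypothetical minimal sketch of such a record follows; every field name here is invented for illustration and does not come from the paper.

```python
from dataclasses import asdict, dataclass, field
from typing import Any

@dataclass
class UserActionDescriptor:
    """Hypothetical record of one multimodal user action in an XR session."""
    timestamp: float                 # seconds since session start
    actor_id: str                    # distinguishes users in collaborative sessions
    virtuality: str                  # "AR", "VR", or "MR"
    action: str                      # e.g. "grab", "gaze", "utterance"
    modality: str                    # e.g. "hand", "eye", "voice"
    intent: str = ""                 # inferred goal of the action
    context: dict[str, Any] = field(default_factory=dict)  # scene state, targets

uad = UserActionDescriptor(3.2, "user-1", "VR", "grab", "hand",
                           intent="pick up tool",
                           context={"target": "wrench"})
record = asdict(uad)  # plain dict, serializable by a session recorder
```

A platform-agnostic recorder could then append such dicts to a session log for later LLM-assisted analysis.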

Citations: 0
Evaluating 3D Visual Comparison Techniques for Change Detection in Virtual Reality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549578
Changrui Zhu, Ernst Kruijff, Vijay M Pawar, Simon Julier

Change detection (CD) is critical in everyday tasks. While current algorithmic approaches for CD are improving, they remain imprecise, often requiring human intervention. Cognitive science research focuses on understanding CD mechanisms, especially through change blindness studies. However, these do not address the primary requirement in real-life CD: detecting changes as effectively as possible. Such a requirement is directly relevant to the field of visual comparison, which studies visualisation techniques for comparing data and identifying differences or changes effectively. Recent studies have used Virtual Reality (VR) to improve visual comparison by providing an immersive platform where users can interact with 3D data at real-life scale, enhancing spatial reasoning. We believe VR could also improve CD performance accordingly. In particular, VR offers stereoscopic depth perception over traditional displays, potentially enhancing the detection of spatial change. In this paper, we develop and analyse three 3D visual comparison techniques for CD in VR: Sliding Window, 3D Slider, and Switch Back. These techniques are evaluated in synthetic but realistic environments and under frequently occurring perceptual challenges, including different Changed Object Size, Lighting Variation, and Scene Drift conditions. Experimental results reveal significant differences between the techniques in detection time and subjective user experience.

Citations: 0
Multimodal Turn in Place: A Comparative Analysis of Visual and Auditory Reset UIs in Redirected Walking.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549852
Ho Jung Lee, Hyunjeong Kim, In-Kwon Lee

Resetting in redirected walking (RDW) allows users to maintain a continuous, collision-free walking experience in virtual reality (VR), even in a limited physical space. Since frequent resets reduce the user's sense of immersion, extensive research has been conducted to develop resetters that provide optimal reset directions. Various visual reset user interfaces (UIs) have been proposed to help users turn in the correct reset direction prescribed by an improved resetter, but their effectiveness has not been sufficiently verified. In addition, expert interviews conducted to identify problems in the current reset process revealed that users sometimes fail to recognize the visual reset UI in time. Therefore, we propose Gauge, a novel visual reset UI that is expected to provide users with an effective and high-quality experience. In Study 1, we demonstrate the effectiveness of the Gauge UI by comparing it to existing UIs (Direction, End Point, and Arrow Alignment). Since users of various locomotion techniques, including RDW, inevitably need to perform resets, we also propose a novel paradigm: a combined multimodal reset interface.

Citations: 0
Peripheral Teleportation: A Rest Frame Design to Mitigate Cybersickness During Virtual Locomotion.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549568
Tongyu Nie, Courtney Hutton Pospick, Ville Cantory, Danhua Zhang, Jasmine Joyce DeGuzman, Victoria Interrante, Isayas Berhe Adhanom, Evan Suma Rosenberg

Mitigating cybersickness can improve the usability of virtual reality (VR) and increase its adoption. The most widely used technique, dynamic field-of-view (FOV) restriction, mitigates cybersickness by blacking out the peripheral region of the user's FOV. However, this approach reduces the visibility of the virtual environment. We propose peripheral teleportation, a novel technique that creates a rest frame (RF) in the user's peripheral vision using content rendered from the current virtual environment. Specifically, the peripheral region is rendered by a pair of RF cameras whose transforms are updated by the user's physical motion. We apply alternating teleportations during translations, or snap turns during rotations, to the RF cameras to keep them close to the current viewpoint transformation. Consequently, the optical flow generated by RF cameras matches the user's physical motion, creating a stable peripheral view. In a between-subjects study (N=90), we compared peripheral teleportation with a traditional black FOV restrictor and an unrestricted control condition. The results showed that peripheral teleportation significantly reduced discomfort and enabled participants to stay immersed in the virtual environment for a longer duration of time. Overall, these findings suggest that peripheral teleportation is a promising technique that VR practitioners may consider adding to their cybersickness mitigation toolset.
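As a rough illustration of the rest-frame (RF) camera update described above: between discrete jumps the RF camera stays fixed, so its peripheral rendering moves only with physical motion. The 2D pose representation, the threshold values, and the function shape below are all assumptions for the sketch, not details from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float    # meters, tracking-space horizontal
    z: float    # meters, tracking-space horizontal
    yaw: float  # degrees

def update_rest_frame_camera(rf: Pose, user: Pose,
                             translate_thresh: float = 1.0,
                             snap_thresh: float = 45.0) -> Pose:
    """Keep the RF camera near the current viewpoint via discrete jumps.

    Because the RF camera never moves continuously with virtual locomotion,
    the optical flow it produces in the periphery stays consistent with the
    user's physical motion. Thresholds here are illustrative only.
    """
    # Teleport the RF camera after a large virtual translation.
    if math.hypot(user.x - rf.x, user.z - rf.z) > translate_thresh:
        rf = Pose(user.x, user.z, rf.yaw)
    # Snap-turn the RF camera after a large virtual rotation.
    dyaw = (user.yaw - rf.yaw + 180.0) % 360.0 - 180.0
    if abs(dyaw) > snap_thresh:
        rf = Pose(rf.x, rf.z, user.yaw)
    return rf
```

A real implementation would drive these jumps from the locomotion events themselves (continuous translation vs. snap turns) rather than from fixed distance and angle thresholds.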

引用次数: 0
Simulating Social Pressure: Evaluating Risk Behaviors in Construction Using Augmented Virtuality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549877
Shiva Pooladvand, Sogand Hasanzadeh, George Takahashi, Kenneth Jongwon Park, Jacob Marroquin

Drawing on social influence and behavioral intention theories, coworkers' risk-taking serves as an "extra motive" (an exogenous factor) for risk-taking behaviors among workers in the workplace. Social influence theories have shown that social factors, such as social pressure and coworker risk-taking, may predict risk-taking behaviors and significantly affect decision-making. While immersive technologies have been widely used to create close-to-real simulations for construction safety research, little research has considered the impact of social presence when evaluating workers' risk decision-making within immersive environments. To bridge this gap, this study developed a state-of-the-art Augmented Virtuality (AV) environment to investigate roofers' risk-taking behaviors when exposed to social stressors (working alongside a safe or unsafe peer). In this augmented virtuality environment, a virtual peer with safe or unsafe behaviors was simulated to impose peer pressure and increase participants' sense of social presence. Participants were asked to install asphalt shingles on a physical section of a roof (passive haptics) while the rest of the environment was projected virtually. During shingle installation, participants' cognitive and behavioral responses were captured using psychophysiological wearable technologies and self-report measures. The results demonstrated that the developed AV model could successfully enhance participants' sense of presence and social presence while serving as an appropriate platform for assessing individuals' decision-making orientations and behavioral changes in the presence of social stressors. Such information shows the value of immersive technologies for examining the naturalistic responses of individuals without exposing them to actual risks.

Citations: 0