SmoothRide: A Versatile Solution to Combat Cybersickness in Elevation-Altering Environments
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456194
Samuel Ang;John Quarles
Cybersickness continues to bar many individuals from taking full advantage of virtual reality (VR) technology. Previous work has established that navigating virtual terrain with elevation changes poses a significant risk in this regard. In this paper, we investigate the effectiveness of three cybersickness reduction strategies on users performing a navigation task across virtual elevation-altering terrain. These strategies include static field of view (FOV) reduction, a flat surface approach that disables terrain collision and maintains constant elevation for users, and SmoothRide, a novel technique designed to dampen a user's perception of vertical motion as they travel. To assess the impact of these strategies, we conducted a within-subjects study involving 61 participants. Each strategy was compared against a control condition, where users navigated across terrain without any cybersickness reduction measures in place. Cybersickness data were collected using the Fast Motion Sickness Scale (FMS) and Simulator Sickness Questionnaire (SSQ), along with galvanic skin response (GSR) data. We measured user presence using the IGroup Presence questionnaire (IPQ) and a Single-Item Presence Scale (SIP). Our findings reveal that users experienced significantly lower levels of cybersickness using SmoothRide or FOV reduction. Presence scores reported on the IPQ were statistically similar between SmoothRide and the control condition. Conversely, terrain flattening had adverse effects on user presence scores, and we could not identify a significant effect on cybersickness compared to the control. We demonstrate that SmoothRide is an effective, lightweight, configurable, and easy-to-integrate tool for reducing cybersickness in simulations featuring elevation-altering terrain.
{"title":"SmoothRide: A Versatile Solution to Combat Cybersickness in Elevation-Altering Environments","authors":"Samuel Ang;John Quarles","doi":"10.1109/TVCG.2024.3456194","DOIUrl":"10.1109/TVCG.2024.3456194","url":null,"abstract":"Cybersickness continues to bar many individuals from taking full advantage of virtual reality (VR) technology. Previous work has established that navigating virtual terrain with elevation changes poses a significant risk in this regard. In this paper, we investigate the effectiveness of three cybersickness reduction strategies on users performing a navigation task across virtual elevation-altering terrain. These strategies include static field of view (FOV) reduction, a flat surface approach that disables terrain collision and maintains constant elevation for users, and SmoothRide, a novel technique designed to dampen a user's perception of vertical motion as they travel. To assess the impact of these strategies, we conducted a within-subjects study involving 61 participants. Each strategy was compared against a control condition, where users navigated across terrain without any cybersickness reduction measures in place. Cybersickness data were collected using the Fast Motion Sickness Scale (FMS) and Simulator Sickness Questionnaire (SSQ), along with galvanic skin response (GSR) data. We measured user presence using the IGroup Presence questionnaire (IPQ) and a Single-Item Presence Scale (SIP). Our findings reveal that users experienced significantly lower levels of cybersickness using SmoothRide or FOV reduction. Presence scores reported on the IPQ were statistically similar between SmoothRide and the control condition. Conversely, terrain flattening had adverse effects on user presence scores, and we could not identify a significant effect on cybersickness compared to the control. We demonstrate that SmoothRide is an effective, lightweight, configurable, and easy-to-integrate tool for reducing cybersickness in simulations featuring elevation-altering terrain.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7152-7161"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TouchMark: Partial Tactile Feedback Design for Upper Limb Rehabilitation in Virtual Reality
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456173
Jingjing Zhang;Mengjie Huang;Yonglin Chen;Kai-Lun Liao;Jiajia Shi;Hai-Ning Liang;Rui Yang
The use of Virtual Reality (VR) technology, especially in medical rehabilitation, has expanded to include tactile cues along with visual stimuli. For patients with upper limb hemiplegia, tangible handles with haptic stimuli could improve their ability to perform daily activities. Traditional VR controllers are unsuitable for patient rehabilitation in VR, necessitating the design of specialized tangible handles with integrated tracking devices. In addition, matching tactile stimulation with corresponding virtual visuals could strengthen users' embodiment (i.e., owning and controlling virtual bodies) in VR, which is crucial for patients' training with virtual hands. Haptic stimuli have been shown to amplify embodiment in VR, whereas the effect of partial tactile stimulation from tangible handles on embodiment remains to be clarified. This research, comprising three experiments, investigates how partial tactile feedback from tangible handles impacts users' embodiment, and proposes a design concept called TouchMark: partial tactile stimuli that help users quickly connect the physical and virtual worlds. In Study 1, handles with three partial tactile factors were manipulated to evaluate users' tactile and comfort perceptions when grasping tangible handles in a non-VR setting. In Study 2, we explored the effects of partial feedback using three forms of TouchMark on the embodiment of healthy users in VR with various tangible handles, while Study 3 focused on similar investigations with patients. These handles were used to complete virtual food preparation tasks. Tactile and comfort perceptions of the tangible handles and users' embodiment were evaluated using questionnaires and interviews. The results indicate that TouchMark with haptic line and ring forms, compared with no stimulation, significantly enhances users' embodiment, especially for patients. The low-cost and innovative TouchMark approach may help users, particularly those with limited VR experience, achieve embodiment and enhance their virtual interactive experience.
{"title":"TouchMark: Partial Tactile Feedback Design for Upper Limb Rehabilitation in Virtual Reality","authors":"Jingjing Zhang;Mengjie Huang;Yonglin Chen;Kai-Lun Liao;Jiajia Shi;Hai-Ning Liang;Rui Yang","doi":"10.1109/TVCG.2024.3456173","DOIUrl":"10.1109/TVCG.2024.3456173","url":null,"abstract":"The use of Virtual Reality (VR) technology, especially in medical rehabilitation, has expanded to include tactile cues along with visual stimuli. For patients with upper limb hemiplegia, tangible handles with haptic stimuli could improve their ability to perform daily activities. Traditional VR controllers are unsuitable for patient rehabilitation in VR, necessitating the design of specialized tangible handles with integrated tracking devices. Besides, matching tactile stimulation with corresponding virtual visuals could strengthen users' embodiment (i.e., owning and controlling virtual bodies) in VR, which is crucial for patients' training with virtual hands. Haptic stimuli have been shown to amplify the embodiment in VR, whereas the effect of partial tactile stimulation from tangible handles on embodiment remains to be clarified. This research, including three experiments, aims to investigate how partial tactile feedback of tangible handles impacts users' embodiment, and we proposed a design concept called TouchMark for partial tactile stimuli that could help users quickly connect the physical and virtual worlds. To evaluate users' tactile and comfort perceptions when grasping tangible handles in a non-VR setting, various handles with three partial tactile factors were manipulated in Study 1. In Study 2, we explored the effects of partial feedback using three forms of TouchMark on the embodiment of healthy users in VR, with various tangible handles, while Study 3 focused on similar investigations with patients. These handles were utilized to complete virtual food preparation tasks. The tactile and comfort perceptions of tangible handles and users' embodiment were evaluated in this research using questionnaires and interviews. The results indicate that TouchMark with haptic line and ring forms over no stimulation would significantly enhance users' embodiment, especially for patients. The low-cost and innovative TouchMark approach may assist users, particularly those with limited VR experience, in achieving the embodiment and enhancing their virtual interactive experience.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7430-7440"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sense of Urgency on the Sense of Agency: Challenges in Evaluating Agency and Embodiment in Virtual Reality
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456139
Christopher You;Roshan Venkatakrishnan;Rohith Venkatakrishnan;Zhuoming Han;Benjamin Lok;Tabitha Peck
Control over an avatar in virtual reality can improve one's perceived sense of agency and embodiment towards that avatar. Yet, the effect of control on agency and embodiment remains unclear. This work investigates two main questions: (1) the effectiveness of currently used metrics in measuring agency and embodiment and (2) the effect of different levels of control on agency, embodiment, and cognitive performance. To do this, we conducted a between-participants user study with three conditions on agency (n = 57). Participants embodied an avatar with one of three levels of control (i.e., Low: control over head only; Medium: control over head and torso; High: control over head, torso, and arms) and completed a Stroop test. Our results indicate that the degree of control afforded to participants impacted their embodiment and cognitive performance but, as expected, could not be detected in the self-reported agency scores. Furthermore, our results offer further insights into the relationship between control and embodiment, suggesting potential uncanny valley-like effects. Future work should aim to refine agency measures to better capture the effect of differing levels of control and consider other methodologies for measuring agency.
{"title":"A Sense of Urgency on the Sense of Agency: Challenges in Evaluating Agency and Embodiment in Virtual Reality","authors":"Christopher You;Roshan Venkatakrishnan;Rohith Venkatakrishnan;Zhuoming Han;Benjamin Lok;Tabitha Peck","doi":"10.1109/TVCG.2024.3456139","DOIUrl":"10.1109/TVCG.2024.3456139","url":null,"abstract":"Control over an avatar in virtual reality can improve one's perceived sense of agency and embodiment towards their avatar. Yet, the relationship between control on agency and embodiment remains unclear. This work aims to investigate two main ideas: (1) the effectiveness of currently used metrics in measuring agency and embodiment and (2) the relationship between different levels of control on agency, embodiment, and cognitive performance. To do this, we conducted a between-participants user study with three conditions on agency ($mathrm{n}=57$). Participants embodied an avatar with one of three types of control (i.e., Low - control over head only, Medium - control over head and torso, or High - control over head, torso, and arms) and completed a Stroop test. Our results indicate that the degree of control afforded to participants impacted their embodiment and cognitive performance but, as expected, could not be detected in the self-reported agency scores. Furthermore, our results elucidated further insights into the relationship between control and embodiment, suggesting potential uncanny valley-like effects. Future work should aim to refine agency measures to better capture the effect of differing levels of control and consider other methodologies to measure agency.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7172-7182"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive Study Analyzer: Collaborative Immersive Analysis of Recorded Social VR Studies
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456146
Anton Lammert;Gareth Rendle;Felix Immohr;Annika Neidhardt;Karlheinz Brandenburg;Alexander Raake;Bernd Froehlich
Virtual Reality (VR) has become an important tool for conducting behavioral studies in realistic, reproducible environments. In this paper, we present ISA, an Immersive Study Analyzer system designed for the comprehensive analysis of social VR studies. For in-depth analysis of participant behavior, ISA records all user actions, speech, and the contextual environment of social VR studies. A key feature is the ability to review and analyze such immersive recordings collaboratively in VR, with support for behavioral coding and user-defined analysis queries for efficient identification of complex behavior. Respatialization of the recorded audio streams enables analysts to follow study participants' conversations in a natural and intuitive way. To support phases of close and loosely coupled collaboration, ISA allows joint and individual temporal navigation, and provides tools to facilitate collaboration among users at different temporal positions. An expert review confirms that ISA effectively supports collaborative immersive analysis, providing a novel and effective tool for nuanced understanding of user behavior in social VR studies.
{"title":"Immersive Study Analyzer: Collaborative Immersive Analysis of Recorded Social VR Studies","authors":"Anton Lammert;Gareth Rendle;Felix Immohr;Annika Neidhardt;Karlheinz Brandenburg;Alexander Raake;Bernd Froehlich","doi":"10.1109/TVCG.2024.3456146","DOIUrl":"10.1109/TVCG.2024.3456146","url":null,"abstract":"Virtual Reality (VR) has become an important tool for conducting behavioral studies in realistic, reproducible environments. In this paper, we present ISA, an Immersive Study Analyzer system designed for the comprehensive analysis of social VR studies. For in-depth analysis of participant behavior, ISA records all user actions, speech, and the contextual environment of social VR studies. A key feature is the ability to review and analyze such immersive recordings collaboratively in VR, through support of behavioral coding and user-defined analysis queries for efficient identification of complex behavior. Respatialization of the recorded audio streams enables analysts to follow study participants' conversations in a natural and intuitive way. To support phases of close and loosely coupled collaboration, ISA allows joint and individual temporal navigation, and provides tools to facilitate collaboration among users at different temporal positions. An expert review confirms that ISA effectively supports collaborative immersive analysis, providing a novel and effective tool for nuanced understanding of user behavior in social VR studies.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7214-7224"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456342
Kaiyuan Tang;Chaoli Wang
In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and significant training time, and issues of low quality, inconsistency, and inflexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance fields (NeRF). The expressiveness of StyleRF-VolVis rests on its ability to accurately separate the underlying scene geometry (i.e., content) from color appearance (i.e., style), conveniently modify the color, opacity, and lighting of the original rendering while maintaining visual content consistency across views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve this, we design a base NeRF model for scene geometry extraction, a palette color network that classifies regions of the radiance field for photorealistic editing, and an unrestricted color network that lifts the color palette constraint via knowledge distillation for non-photorealistic editing. We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and by comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.
{"title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization.","authors":"Kaiyuan Tang, Chaoli Wang","doi":"10.1109/TVCG.2024.3456342","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3456342","url":null,"abstract":"<p><p>In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues such as low quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Exploratory Expert-Study for Multi-Type Haptic Feedback for Automotive Virtual Reality Tasks
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456203
Alexander Achberger;Patrick Gebhardt;Michael Sedlmair
Previous research has shown that integrating haptic feedback can improve immersion and realism in automotive VR applications. However, current haptic feedback approaches primarily focus on a single feedback type. This means users must switch between devices to experience haptic stimuli for different feedback types, such as grabbing, collision, or weight simulation. This restriction limits the ability to simulate haptics realistically for complex tasks such as maintenance. To address this issue, we evaluated existing feedback devices based on our requirements analysis to determine which devices are most suitable for simulating these three feedback types. Since no suitable haptic feedback system can simulate all three feedback types simultaneously, we evaluated which devices can be combined. Based on that, we devised a new multi-type haptic feedback system combining three haptic feedback devices. We evaluated the system with different feedback-type combinations through a qualitative expert study involving twelve automotive VR experts. The results showed that combining weight and collision feedback yielded the best and most realistic experience. The study also highlighted technical limitations in current grabbing devices. Our findings provide insights into the effectiveness of haptic device combinations and practical boundaries for automotive virtual reality tasks.
{"title":"An Exploratory Expert-Study for Multi-Type Haptic Feedback for Automotive Virtual Reality Tasks","authors":"Alexander Achberger;Patrick Gebhardt;Michael Sedlmair","doi":"10.1109/TVCG.2024.3456203","DOIUrl":"10.1109/TVCG.2024.3456203","url":null,"abstract":"Previous research has shown that integrating haptic feedback can improve immersion and realism in automotive VR applications. However, current haptic feedback approaches primarily focus on a single feedback type. This means users must switch between devices to experience haptic stimuli for different feedback types, such as grabbing, collision, or weight simulation. This restriction limits the ability to simulate haptics realistically for complex tasks such as maintenance. To address this issue, we evaluated existing feedback devices based on our requirements analysis to determine which devices are most suitable for simulating these three feedback types. Since no suitable haptic feedback system can simulate all three feedback types simultaneously, we evaluated which devices can be combined. Based on that, we devised a new multi-type haptic feedback system combining three haptic feedback devices. We evaluated the system with different feedback-type combinations through a qualitative expert study involving twelve automotive VR experts. The results showed that combining weight and collision feedback yielded the best and most realistic experience. The study also highlighted technical limitations in current grabbing devices. Our findings provide insights into the effectiveness of haptic device combinations and practical boundaries for automotive virtual reality tasks.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7255-7265"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth Perception in Optical See-Through Augmented Reality: Investigating the Impact of Texture Density, Luminance Contrast, and Color Contrast
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456162
Chaochao Liu;Shining Ma;Yue Liu;Yongtian Wang;Weitao Song
Immersive augmented reality (AR) systems necessitate precise depth registration between virtual objects and the real scene. Prior studies have emphasized the efficacy of surface texture in providing depth cues that enhance depth perception across various media, including the real scene, virtual reality, and AR. However, these studies predominantly focus on black-and-white textures, leaving a gap in understanding the effectiveness of colored textures. To address this gap and further explore texture-related factors in AR, a series of experiments was conducted to investigate the effects of different texture cues on depth perception using a perceptual matching method. The findings indicate that absolute depth error increases with decreasing contrast for black-and-white textures. Moreover, textures with higher color contrast contribute to more accurate depth judgments in AR. However, no significant effect of texture density on depth perception was observed. These findings serve as a theoretical reference for texture design in AR, aiding in the optimization of virtual-real registration.
{"title":"Depth Perception in Optical See-Through Augmented Reality: Investigating the Impact of Texture Density, Luminance Contrast, and Color Contrast","authors":"Chaochao Liu;Shining Ma;Yue Liu;Yongtian Wang;Weitao Song","doi":"10.1109/TVCG.2024.3456162","DOIUrl":"10.1109/TVCG.2024.3456162","url":null,"abstract":"The immersive augmented reality (AR) system necessitates precise depth registration between virtual objects and the real scene. Prior studies have emphasized the efficacy of surface texture in providing depth cues to enhance depth perception across various media, including the real scene, virtual reality, and AR. However, these studies predominantly focus on black-and-white textures, leaving a gap in understanding the effectiveness of colored textures. To address this gap and further explore texture-related factors in AR, a series of experiments were conducted to investigate the effects of different texture cues on depth perception using the perceptual matching method. Findings indicate that the absolute depth error increases with decreasing contrast under black-and-white texture. Moreover, textures with higher color contrast also contribute to enhanced accuracy of depth judgments in AR. However, no significant effect of texture density on depth perception was observed. The findings serve as a theoretical reference for texture design in AR, aiding in the optimization of virtual-real registration processes.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7266-7276"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaze-Contingent Layered Optical See-Through Displays with a Confidence-Driven View Volume
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456204
Christoph Ebner;Alexander Plopski;Dieter Schmalstieg;Denis Kalkofen
The vergence-accommodation conflict (VAC) presents a major perceptual challenge for head-mounted displays with a fixed image plane. Varifocal and layered display designs can mitigate the VAC. However, the image quality of varifocal displays is affected by imprecise eye tracking, whereas layered displays suffer from reduced image contrast as the distance between layers increases. Combined designs support a larger workspace and tolerate some eye-tracking error. However, any layered design with a fixed layer spacing restricts the amount of error compensation and limits the in-focus contrast. We extend previous hybrid designs by introducing confidence-driven volume control, which adjusts the size of the view volume at runtime. We use the eye tracker's confidence to control the spacing of display layers and optimize the trade-off between the display's view volume and the amount of eye-tracking error the display can compensate for. When the focus point is estimated with high confidence, our approach provides high in-focus contrast, whereas low-quality eye tracking enlarges the view volume to tolerate the error. We describe our design, present its implementation as an optical see-through head-mounted display using a multiplicative layer combination, and present an evaluation comparing our design with previous approaches.
{"title":"Gaze-Contingent Layered Optical See-Through Displays with a Confidence-Driven View Volume","authors":"Christoph Ebner;Alexander Plopski;Dieter Schmalstieg;Denis Kalkofen","doi":"10.1109/TVCG.2024.3456204","DOIUrl":"10.1109/TVCG.2024.3456204","url":null,"abstract":"The vergence-accommodation conflict (VAC) presents a major perceptual challenge for head-mounted displays with a fixed image plane. Varifocal and layered display designs can mitigate the VAC. However, the image quality of varifocal displays is affected by imprecise eye tracking, whereas layered displays suffer from reduced image contrast as the distance between layers increases. Combined designs support a larger workspace and tolerate some eye-tracking error. However, any layered design with a fixed layer spacing restricts the amount of error compensation and limits the in-focus contrast. We extend previous hybrid designs by introducing confidence-driven volume control, which adjusts the size of the view volume at runtime. We use the eye tracker's confidence to control the spacing of display layers and optimize the trade-off between the display's view volume and the amount of eye tracking error the display can compensate. In the case of high-quality focus point estimation, our approach provides high in-focus contrast, whereas low-quality eye tracking increases the view volume to tolerate the error. We describe our design, present its implementation as an optical-see head-mounted display using a multiplicative layer combination, and present an evaluation comparing our design with previous approaches.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7203-7213"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10670426","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring and Modeling Directional Effects on Steering Behavior in Virtual Reality
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456166
Yushi Wei;Kemu Xu;Yue Li;Lingyun Yu;Hai-Ning Liang
Steering is a fundamental task in interactive Virtual Reality (VR) systems. Prior work has demonstrated that movement direction can significantly influence user behavior in steering tasks and that different interactive environments, such as tablets and PCs, can lead to different behavioral patterns. However, the directional effect in VR environments remains unexplored. Given the widespread use of steering tasks in virtual environments (VEs), including menu adjustment and object manipulation, this work seeks to understand and model the directional effect with a focus on barehand interaction, which is typical in VEs. This paper presents the results of two studies. The first study collected behavioral data in four categories: movement time, average movement speed, success rate, and reenter times. Based on the results, we examined the effect of movement direction and built the SθModel. We then empirically evaluated the model using the data collected in the first study. The results showed that our proposed model achieved the best performance across all metrics (r² > 0.95), with more than a 15% improvement over the original Steering Law in prediction accuracy. Next, we further validated the SθModel in another study that varied the device and steering direction. Consistent with the previous assessments, the model continued to exhibit optimal performance in predicting both movement time and speed. Finally, based on these results, we formulated design recommendations for steering tasks in VEs to enhance user experience and interaction efficiency.
{"title":"Exploring and Modeling Directional Effects on Steering Behavior in Virtual Reality","authors":"Yushi Wei;Kemu Xu;Yue Li;Lingyun Yu;Hai-Ning Liang","doi":"10.1109/TVCG.2024.3456166","DOIUrl":"10.1109/TVCG.2024.3456166","url":null,"abstract":"Steering is a fundamental task in interactive Virtual Reality (VR) systems. Prior work has demonstrated that movement direction can significantly influence user behavior in the steering task, and different interactive environments (VEs) can lead to various behavioral patterns, such as tablets and PCs. However, its impact on VR environments remains unexplored. Given the widespread use of steering tasks in VEs, including menu adjustment and object manipulation, this work seeks to understand and model the directional effect with a focus on barehand interaction, which is typical in VEs. This paper presents the results of two studies. The first study was conducted to collect behavioral data with four categories: movement time, average movement speed, success rate, and reenter times. According to the results, we examined the effect of movement direction and built the SθModel. We then empirically evaluated the model through the data collected from the first study. The results proved that our proposed model achieved the best performance across all the metrics (r2 > 0.95), with more than 15% improvement over the original Steering Law in terms of prediction accuracy. Next, we further validated the SθModel by another study with the change of device and steering direction. Consistent with previous assessments, the model continues to exhibit optimal performance in both predicting movement time and speed. Finally, based on the results, we formulated design recommendations for steering tasks in VEs to enhance user experience and interaction efficiency.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7107-7117"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filtering on the Go: Effect of Filters on Gaze Pointing Accuracy During Physical Locomotion in Extended Reality
Pub Date: 2024-09-10 | DOI: 10.1109/TVCG.2024.3456153
Pavel Manakhov;Ludwig Sidenmark;Ken Pfeuffer;Hans Gellersen
Eye-tracking filters have been shown to improve the accuracy of gaze estimation and input in stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings. We conducted a computational experiment simulating the performance of the online filters using data from participants attending to visual targets located in world-, path-, and two head-based reference frames while standing, walking, and jogging. Our results provide insights into the filters' effectiveness and the factors that affect it, such as the amount of noise caused by locomotion and differences in compensatory eye movements, and demonstrate that filters with saccade detection prove most useful for on-the-go settings. We discuss the implications of our findings and conclude with guidance on gaze data filtering for interaction in extended reality.
{"title":"Filtering on the Go: Effect of Filters on Gaze Pointing Accuracy During Physical Locomotion in Extended Reality","authors":"Pavel Manakhov;Ludwig Sidenmark;Ken Pfeuffer;Hans Gellersen","doi":"10.1109/TVCG.2024.3456153","DOIUrl":"10.1109/TVCG.2024.3456153","url":null,"abstract":"Eye tracking filters have been shown to improve accuracy of gaze estimation and input for stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings. We conducted a computational experiment where we simulate performance of the online filters using data on participants attending visual targets located in world-, path-, and two head-based reference frames while standing, walking, and jogging. Our results provide insights into the filters' effectiveness and factors that affect it, such as the amount of noise caused by locomotion and differences in compensatory eye movements, and demonstrate that filters with saccade detection prove most useful for on-the-go settings. We discuss the implications of our findings and conclude with guidance on gaze data filtering for interaction in extended reality.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7234-7244"},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}