
Latest publications in IEEE Transactions on Visualization and Computer Graphics

Personalized Dual-Level Color Grading for 360-degree Images in Virtual Reality.
Pub Date: 2025-03-12 | DOI: 10.1109/TVCG.2025.3549886
Lin-Ping Yuan, John J Dudley, Per Ola Kristensson, Huamin Qu

The rising popularity of 360-degree images and virtual reality (VR) has spurred a growing interest among creators in producing visually appealing content through effective color grading processes. Although existing computational approaches have simplified the global color adjustment for entire images with Preferential Bayesian Optimization (PBO), they neglect local colors for points of interest and are not optimized for the immersive nature of VR. In response, we propose a dual-level PBO framework that integrates global and local color adjustments tailored for VR environments. We design and evaluate a novel context-aware preferential Gaussian Process (GP) to learn contextual preferences for local colors, taking into account the dynamic contexts of previously established global colors. Additionally, recognizing the limitations of desktop-based interfaces for comparing 360-degree images, we design three VR interfaces for color comparison. We conduct a controlled user study to investigate the effectiveness of the three VR interface designs and find that users prefer to be enveloped by one 360-degree image at a time and to compare two rather than four color-graded options.
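To make the preference-learning loop concrete, below is a minimal sketch of preferential Bayesian optimization over color-grading parameters. It is not the authors' dual-level framework: it replaces the context-aware preferential GP likelihood with a simple win-rate surrogate fitted by a standard GP regressor, and the three grading parameters, the simulated preference oracle, and all constants are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical 3-D grading parameters (temperature, tint, saturation), each in [0, 1].
candidates = rng.random((200, 3))

def simulated_user_prefers(a, b):
    """Stand-in preference oracle: favors gradings near a hidden ideal point."""
    ideal = np.array([0.6, 0.4, 0.7])
    return np.linalg.norm(a - ideal) < np.linalg.norm(b - ideal)

wins = np.zeros(len(candidates))   # duels won per candidate
plays = np.zeros(len(candidates))  # duels played per candidate
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-2)

pair = rng.choice(len(candidates), size=2, replace=False)
for _ in range(30):  # 30 pairwise comparisons shown to the "user"
    a, b = int(pair[0]), int(pair[1])
    winner = a if simulated_user_prefers(candidates[a], candidates[b]) else b
    wins[winner] += 1
    plays[a] += 1
    plays[b] += 1
    seen = plays > 0
    # Win rate as a crude surrogate for the latent preferential utility.
    gp.fit(candidates[seen], wins[seen] / plays[seen])
    mu, sigma = gp.predict(candidates, return_std=True)
    # Next duel: current best guess (UCB maximizer) vs. the most uncertain candidate.
    pair = np.array([np.argmax(mu + sigma), np.argmax(sigma)])
    if pair[0] == pair[1]:
        pair[1] = rng.integers(len(candidates))

print("estimated best grading:", candidates[int(np.argmax(gp.predict(candidates)))])
```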

{"title":"Personalized Dual-Level Color Grading for 360-degree Images in Virtual Reality.","authors":"Lin-Ping Yuan, John J Dudley, Per Ola Kristensson, Huamin Qu","doi":"10.1109/TVCG.2025.3549886","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549886","url":null,"abstract":"<p><p>The rising popularity of 360-degree images and virtual reality (VR) has spurred a growing interest among creators in producing visually appealing content through effective color grading processes. Although existing computational approaches have simplified the global color adjustment for entire images with Preferential Bayesian Optimization (PBO), they neglect local colors for points of interest and are not optimized for the immersive nature of VR. In response, we propose a dual-level PBO framework that integrates global and local color adjustments tailored for VR environments. We design and evaluate a novel context-aware preferential Gaussian Process (GP) to learn contextual preferences for local colors, taking into account the dynamic contexts of previously established global colors. Additionally, recognizing the limitations of desktop-based interfaces for comparing 360-degree images, we design three VR interfaces for color comparison. We conduct a controlled user study to investigate the effectiveness of the three VR interface designs and find that users prefer to be enveloped by one 360-degree image at a time and to compare two rather than four color-graded options.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TextIR: A Simple Framework for Text-based Editable Image Restoration.
Pub Date: 2025-03-12 | DOI: 10.1109/TVCG.2025.3550844
Yunpeng Bai, Cairong Wang, Shuzhao Xie, Chao Dong, Chun Yuan, Zhi Wang

Many current image restoration approaches utilize neural networks to acquire robust image-level priors from extensive datasets, aiming to reconstruct missing details. Nevertheless, these methods often falter with images that exhibit significant information gaps. While incorporating external priors or leveraging reference images can provide supplemental information, these strategies are limited in their practical scope. Alternatively, textual inputs offer greater accessibility and adaptability. In this study, we develop a sophisticated framework enabling users to guide the restoration of deteriorated images via textual descriptions. Utilizing the text-image compatibility feature of CLIP enhances the integration of textual and visual data. Our versatile framework supports multiple restoration activities such as image inpainting, super-resolution, and colorization. Comprehensive testing validates our technique's efficacy.
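As a rough illustration of text-guided restoration, the sketch below encodes a caption with OpenAI's CLIP and injects the resulting embedding into a toy convolutional restorer as a channel-wise bias. The two-layer network, the injection scheme, and the placeholder degraded input are assumptions made for illustration; TextIR's actual architecture is not reproduced here.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class TextConditionedRestorer(torch.nn.Module):
    """Toy restorer: degraded image + CLIP text embedding -> restored image."""
    def __init__(self, text_dim=512):  # ViT-B/32 text embeddings are 512-D
        super().__init__()
        self.img_enc = torch.nn.Conv2d(3, 64, 3, padding=1)
        self.text_proj = torch.nn.Linear(text_dim, 64)  # inject text as a channel bias
        self.dec = torch.nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, degraded, text_emb):
        feat = torch.relu(self.img_enc(degraded))
        bias = self.text_proj(text_emb)[:, :, None, None]  # broadcast over H and W
        return torch.sigmoid(self.dec(feat + bias))

with torch.no_grad():
    tokens = clip.tokenize(["a man with brown hair and blue eyes"]).to(device)
    text_emb = model.encode_text(tokens).float()  # shape (1, 512)

restorer = TextConditionedRestorer().to(device)
degraded = torch.rand(1, 3, 224, 224, device=device)  # placeholder masked/low-res input
restored = restorer(degraded, text_emb)
print(restored.shape)  # torch.Size([1, 3, 224, 224])
```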

{"title":"TextIR: A Simple Framework for Text-based Editable Image Restoration.","authors":"Yunpeng Bai, Cairong Wang, Shuzhao Xie, Chao Dong, Chun Yuan, Zhi Wang","doi":"10.1109/TVCG.2025.3550844","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3550844","url":null,"abstract":"<p><p>Many current image restoration approaches utilize neural networks to acquire robust image-level priors from extensive datasets, aiming to reconstruct missing details. Nevertheless, these methods often falter with images that exhibit significant information gaps. While incorporating external priors or leveraging reference images can provide supplemental information, these strategies are limited in their practical scope. Alternatively, textual inputs offer greater accessibility and adaptability. In this study, we develop a sophisticated framework enabling users to guide the restoration of deteriorated images via textual descriptions. Utilizing the text-image compatibility feature of CLIP enhances the integration of textual and visual data. Our versatile framework supports multiple restoration activities such as image inpainting, super-resolution, and colorization. Comprehensive testing validates our technique's efficacy.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reaction Time as a Proxy for Presence in Mixed Reality with Distraction.
Pub Date: 2025-03-12 | DOI: 10.1109/TVCG.2025.3549575
Yasra Chandio, Victoria Interrante, Fatima M Anwar

Distractions in mixed reality (MR) environments can significantly influence user experience, affecting key factors such as presence, reaction time, cognitive load, and Break in Presence (BIP). Presence measures immersion, reaction time captures user responsiveness, cognitive load reflects mental effort, and BIP represents moments when attention shifts from the virtual to the real world, breaking immersion. While prior work has established that distractions impact these factors individually, the relationships between these constructs remain underexplored, particularly in MR environments where users engage with both real and virtual stimuli. To address this gap, we present a theoretical model of how congruent and incongruent distractions affect all of these constructs. We conducted a within-subject study (N = 54) in which participants performed image-sorting tasks under different distraction conditions. Our findings show that incongruent distractions significantly increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.
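To unpack what "presence mediating these effects" means statistically, here is a minimal Baron-Kenny-style mediation check on simulated data using statsmodels. The effect sizes, noise levels, and trial counts are fabricated for illustration only and bear no relation to the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 108  # hypothetical number of observations; illustrative only

# Simulated data: incongruent distraction lowers presence, which slows reaction time.
distraction = rng.integers(0, 2, n)  # 0 = congruent, 1 = incongruent
presence = 5 - 1.2 * distraction + rng.normal(0, 1, n)
reaction_ms = 600 - 30 * presence + 20 * distraction + rng.normal(0, 25, n)
df = pd.DataFrame({"distraction": distraction, "presence": presence, "rt": reaction_ms})

# Baron-Kenny steps: (a) X -> M, then (b) X + M -> Y; mediation is suggested when
# M predicts Y and X's direct effect shrinks relative to the total-effect model.
path_a = smf.ols("presence ~ distraction", df).fit()
total = smf.ols("rt ~ distraction", df).fit()
path_bc = smf.ols("rt ~ distraction + presence", df).fit()

print("a (X->M):", path_a.params["distraction"])
print("total effect c:", total.params["distraction"])
print("direct effect c':", path_bc.params["distraction"],
      " b (M->Y):", path_bc.params["presence"])
```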

{"title":"Reaction Time as a Proxy for Presence in Mixed Reality with Distraction.","authors":"Yasra Chandio, Victoria Interrante, Fatima M Anwar","doi":"10.1109/TVCG.2025.3549575","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549575","url":null,"abstract":"<p><p>Distractions in mixed reality (MR) environments can significantly influence user experience, affecting key factors such as presence, reaction time, cognitive load, and Break in Presence (BIP). Presence measures immersion, reaction time captures user responsiveness, cognitive load reflects mental effort, and BIP represents moments when attention shifts from the virtual to the real world, breaking immersion. While prior work has established that distractions impact these factors individually, the relationship between these constructs remains underexplored, particularly in MR environments where users engage with both real and virtual stimuli. To address this gap, we have presented a theoretical model to understand how congruent and incongruent distractions affect all these constructs. We conducted a within-subject study (N = 54) where participants performed image-sorting tasks under different distraction conditions. Our findings show that incongruent distractions significantly increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Minimalism or Creative Chaos? On the Arrangement and Analysis of Numerous Scatterplots in Immersive 3D Knowledge Spaces.
Pub Date: 2025-03-12 | DOI: 10.1109/TVCG.2025.3549546
Melanie Derksen, Torsten Kuhlen, Mario Botsch, Tim Weissker

Working with scatterplots is a classic everyday task for data analysts, one that becomes increasingly complex as more plots are required to form an understanding of the underlying data. To help analysts retrieve relevant plots more quickly when they are needed, immersive virtual environments (iVEs) provide the option to freely arrange scatterplots in the 3D space around them. In this paper, we investigate the impact of different virtual environments on users' ability to quickly find and retrieve individual scatterplots from a larger collection. We tested three scenarios, all of which allowed users to position the plots freely in space according to their own needs, but each providing a different number of landmarks as visual cues: an Empty scene as a baseline condition, a single-landmark condition with one prominent visual cue (a Desk), and a multiple-landmarks condition (a virtual Office). Results from a between-subjects investigation with 45 participants indicate that the time and effort users invest in arranging their plots within an iVE had a greater impact on memory performance than the design of the iVE itself. We report on the individual arrangement strategies that participants used to solve the task effectively and underline the importance of an active arrangement phase for supporting the spatial memorization of scatterplots in iVEs.

{"title":"Minimalism or Creative Chaos? On the Arrangement and Analysis of Numerous Scatterplots in Immersive 3D Knowledge Spaces.","authors":"Melanie Derksen, Torsten Kuhlen, Mario Botsch, Tim Weissker","doi":"10.1109/TVCG.2025.3549546","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549546","url":null,"abstract":"<p><p>Working with scatterplots is a classic everyday task for data analysts, which gets increasingly complex the more plots are required to form an understanding of the underlying data. To help analysts retrieve relevant plots more quickly when they are needed, immersive virtual environments (iVEs) provide them with the option to freely arrange scatterplots in the 3D space around them. In this paper, we investigate the impact of different virtual environments on the users' ability to quickly find and retrieve individual scatterplots from a larger collection. We tested three different scenarios, all having in common that users were able to position the plots freely in space according to their own needs, but each providing them with varying numbers of landmarks serving as visual cues: an Empty scene as a baseline condition, a single landmark condition with one prominent visual cue being a Desk, and a multiple landmarks condition being a virtual Office. Results from a between-subject investigation with 45 participants indicate that the time and effort users invest in arranging their plots within an iVE had a greater impact on memory performance than the design of the iVE itself. We report on the individual arrangement strategies that participants used to solve the task effectively and underline the importance of an active arrangement phase for supporting the spatial memorization of scatterplots in iVEs.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Display to Interaction: Design Patterns for Cross-Reality Systems.
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549893
Robbe Cools, Jihae Han, Augusto Esteves, Adalberto L Simeone

Cross-reality is an emerging research area concerned with systems operating across different points on the reality-virtuality continuum. These systems are often complex, involving multiple realities and users, and thus there is a need for an overarching design framework, which, despite growing interest, has yet to be developed. This paper addresses this need by presenting eleven design patterns for cross-reality applications across four categories: fundamental, origin, display, and interaction patterns. To develop these design patterns, we analysed a sample of 60 papers with the goal of identifying recurring solutions. The patterns were then described in the form of intent, solution, and application examples, each accompanied by a diagram and an archetypal example. This paper provides designers with a comprehensive set of patterns that they can use and draw inspiration from when creating cross-reality systems.

{"title":"From Display to Interaction: Design Patterns for Cross-Reality Systems.","authors":"Robbe Cools, Jihae Han, Augusto Esteves, Adalberto L Simeone","doi":"10.1109/TVCG.2025.3549893","DOIUrl":"10.1109/TVCG.2025.3549893","url":null,"abstract":"<p><p>Cross-reality is an emerging research area concerned with systems operating across different points on the reality-virtuality continuum. These systems are often complex, involving multiple realities and users, and thus there is a need for an overarching design framework, which, despite growing interest has yet to be developed. This paper addresses this need by presenting eleven design patterns for cross-reality applications across the following four categories: fundamental, origin, display, and interaction patterns. To develop these design patterns we analysed a sample of 60 papers, with the goal of identifying recurring solutions. These patterns were then described in form of intent, solution, and application examples, accompanied by a diagram and archetypal example. This paper provides designers with a comprehensive set of patterns that they can use and draw inspiration from when creating cross-reality systems.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FocalSelect: Improving Occluded Objects Acquisition with Heuristic Selection and Disambiguation in Virtual Reality.
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549554
Duotun Wang, Linjie Qiu, Boyu Li, Qianxi Liu, Xiaoying Wei, Jianhao Chen, Zeyu Wang, Mingming Fan

In recent years, various head-worn virtual reality (VR) techniques have emerged to enhance object selection for occluded or distant targets. However, many approaches focus solely on ray-casting inputs, restricting their use with other input methods, such as bare hands. Additionally, some techniques speed up selection by changing the user's perspective or modifying the scene context, which may complicate interactions when users plan to resume or manipulate the scene afterward. To address these challenges, we present FocalSelect, a heuristic selection technique that builds 3D disambiguation through head-hand coordination and scoring-based functions. Our interaction design adheres to the principle that the intended selection range is a small sector of the headset's viewing frustum, allowing optimal targets to be identified within this scope. We also introduce a density-aware adjustable occlusion plane for effective depth culling of rendered objects. Two experiments are conducted to assess the adaptability of FocalSelect across different input modalities and its performance against five selection techniques. The results indicate that FocalSelect enhances selection experiences in occluded and remote scenarios while preserving the spatial context among objects. This preservation helps maintain users' understanding of the original scene and facilitates further manipulation. We also explore potential applications and enhancements to demonstrate more practical implementations of FocalSelect.
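The head-hand scoring idea can be sketched as follows: candidates inside a small head-gaze cone are ranked by a weighted combination of head-ray and hand-ray alignment. The cone angle, weights, and scene coordinates are illustrative assumptions rather than FocalSelect's actual parameters, and the sketch omits the density-aware occlusion plane.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def selection_score(obj_pos, head_pos, head_dir, hand_pos, hand_dir,
                    cone_half_angle_deg=15.0, w_head=0.5, w_hand=0.5):
    """Score a candidate target by head-hand coordination.

    Returns 0 for objects outside the head's selection cone; otherwise a
    weighted sum of angular alignment with the head gaze and the hand ray.
    """
    to_obj_head = normalize(obj_pos - head_pos)
    to_obj_hand = normalize(obj_pos - hand_pos)
    cos_head = float(np.dot(to_obj_head, normalize(head_dir)))
    cos_hand = float(np.dot(to_obj_hand, normalize(hand_dir)))
    if cos_head < np.cos(np.radians(cone_half_angle_deg)):
        return 0.0  # outside the intended small sector of the viewing frustum
    return w_head * cos_head + w_hand * cos_hand

# Pick the best-scoring candidate among several (possibly occluded) targets.
head_pos, head_dir = np.array([0.0, 1.7, 0.0]), np.array([0.0, 0.0, -1.0])
hand_pos, hand_dir = np.array([0.2, 1.2, 0.0]), np.array([0.0, 0.1, -1.0])
objects = [np.array([0.05, 1.6, -2.0]),
           np.array([0.0, 1.7, -3.0]),
           np.array([1.5, 1.7, -2.0])]
scores = [selection_score(o, head_pos, head_dir, hand_pos, hand_dir) for o in objects]
print("selected object index:", int(np.argmax(scores)))
```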

{"title":"FocalSelect: Improving Occluded Objects Acquisition with Heuristic Selection and Disambiguation in Virtual Reality.","authors":"Duotun Wang, Linjie Qiu, Boyu Li, Qianxi Liu, Xiaoying Wei, Jianhao Chen, Zeyu Wang, Mingming Fan","doi":"10.1109/TVCG.2025.3549554","DOIUrl":"10.1109/TVCG.2025.3549554","url":null,"abstract":"<p><p>In recent years, various head-worn virtual reality (VR) techniques have emerged to enhance object selection for occluded or distant targets. However, many approaches focus solely on ray-casting inputs, restricting their use with other input methods, such as bare hands. Additionally, some techniques speed up selection by changing the user's perspective or modifying the scene context, which may complicate interactions when users plan to resume or manipulate the scene afterward. To address these challenges, we present FocalSelect, a heuristic selection technique that builds 3D disambiguation through head-hand coordination and scoring-based functions. Our interaction design adheres to the principle that the intended selection range is a small sector of the headset's viewing frustum, allowing optimal targets to be identified within this scope. We also introduce a density-aware adjustable occlusion plane for effective depth culling of rendered objects. Two experiments are conducted to assess the adaptability of FocalSelect across different input modalities and its performance against five selection techniques. The results indicate that FocalSelect enhances selection experiences in occluded and remote scenarios while preserving the spatial context among objects. This preservation helps maintain users' understanding of the original scene and facilitates further manipulation. We also explore potential applications and enhancements to demonstrate more practical implementations of FocalSelect.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Early Warning System Based on Visual Feedback for Light-Based Hand Tracking Failures in VR Head-Mounted Displays.
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549544
Mohammad Raihanul Bashar, Anil Ufuk Batmaz

State-of-the-art Virtual Reality (VR) Head-Mounted Displays (HMDs) enable users to interact with virtual objects using their hands via built-in camera systems. However, the accuracy of hand movement detection is often affected by limitations in both camera hardware and software, including the image processing and machine learning algorithms used for hand skeleton detection. In this work, we investigated a visual feedback mechanism to create an early warning system that detects hand skeleton recognition failures in VR HMDs and warns users in advance. We conducted two user studies to evaluate the system's effectiveness. The first study involved a cup stacking task, where participants stacked virtual cups. In the second study, participants performed a ball sorting task, picking and placing colored balls into corresponding baskets. During both studies, we monitored the VR HMD's built-in hand tracking confidence and provided visual feedback to warn users when the tracking confidence was 'low'. The results showed that warning users before the hand tracking algorithm fails improved the system's usability while reducing frustration. The impact of our results extends beyond VR HMDs: any system that uses hand tracking, such as robotics, can benefit from this approach.
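A minimal version of such an early warning loop might look like the sketch below: per-frame tracking confidence is debounced over a sliding window, so the warning overlay appears only when confidence stays low for most of the recent frames. The threshold, window size, and confidence stream are assumptions; the HMD-specific API for reading hand-tracking confidence is not shown.

```python
from collections import deque

class TrackingWarning:
    """Debounced early warning: flag when confidence is 'low' in at least
    min_low_frames of the last `window` frames, so isolated noisy frames
    do not trigger the overlay. All parameter values are illustrative."""

    def __init__(self, threshold=0.5, window=10, min_low_frames=6):
        self.threshold = threshold
        self.history = deque(maxlen=window)
        self.min_low_frames = min_low_frames

    def update(self, confidence: float) -> bool:
        self.history.append(confidence < self.threshold)
        return sum(self.history) >= self.min_low_frames

warner = TrackingWarning()
stream = [0.9, 0.8, 0.4, 0.3, 0.45, 0.2, 0.35, 0.4, 0.3, 0.25]  # per-frame confidence
for frame, conf in enumerate(stream):
    if warner.update(conf):
        print(f"frame {frame}: show 'low tracking confidence' warning overlay")
```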

{"title":"An Early Warning System Based on Visual Feedback for Light-Based Hand Tracking Failures in VR Head-Mounted Displays.","authors":"Mohammad Raihanul Bashar, Anil Ufuk Batmaz","doi":"10.1109/TVCG.2025.3549544","DOIUrl":"10.1109/TVCG.2025.3549544","url":null,"abstract":"<p><p>State-of-the-art Virtual Reality (VR) Head-Mounted Displays (HMDs) enable users to interact with virtual objects using their hands via built-in camera systems. However, the accuracy of the hand movement detection algorithm is often affected by limitations in both camera hardware and software, including image processing & machine learning algorithms used for hand skeleton detection. In this work, we investigated a visual feedback mechanism to create an early warning system that detects hand skeleton recognition failures in VR HMDs and warns users in advance. We conducted two user studies to evaluate the system's effectiveness. The first study involved a cup stacking task, where participants stacked virtual cups. In the second study, participants performed a ball sorting task, picking and placing colored balls into corresponding baskets. During both of the studies, we monitored the built-in hand tracking confidence of the VR HMD system and provided visual feedback to the user to warn them when the tracking confidence is 'low'. The results showed that warning users before the hand tracking algorithm fails improved the system's usability while reducing frustration. The impact of our results extends beyond VR HMDs, any system that uses hand tracking, such as robotics, can benefit from this approach.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TeamPortal: Exploring Virtual Reality Collaboration Through Shared and Manipulating Parallel Views.
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549569
Xian Wang, Luyao Shen, Lei Chen, Mingming Fan, Lik-Hang Lee

Virtual Reality (VR) offers a unique collaborative experience, with parallel views playing a pivotal role in Collaborative Virtual Environments by supporting the transfer and delivery of items. Sharing and manipulating a partner's view provides users with a broader perspective that helps them identify targets and their partner's actions. We therefore proposed TeamPortal and conducted two user studies with 72 participants (36 pairs) to investigate the potential benefits of interactive, shared perspectives in VR collaboration. Our first study compared ShaView and TeamPortal against a baseline in a collaborative task that encompassed a series of searching and manipulation subtasks. The results show that TeamPortal significantly reduced movement and increased collaborative efficiency and social presence in complex tasks. Following these results, the second study evaluated three variants: TeamPortal+, SnapTeamPortal+, and DropTeamPortal+. The results show that both SnapTeamPortal+ and DropTeamPortal+ improved task efficiency and willingness to further adopt these technologies, though SnapTeamPortal+ reduced co-presence. Based on these findings, we propose three design implications to inform the development of future VR collaboration systems.

{"title":"TeamPortal: Exploring Virtual Reality Collaboration Through Shared and Manipulating Parallel Views.","authors":"Xian Wang, Luyao Shen, Lei Chen, Mingming Fan, Lik-Hang Lee","doi":"10.1109/TVCG.2025.3549569","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549569","url":null,"abstract":"<p><p>Virtual Reality (VR) offers a unique collaborative experience, with parallel views playing a pivotal role in Collaborative Virtual Environments by supporting the transfer and delivery of items. Sharing and manipulating partners' views provides users with a broader perspective that helps them identify the targets and partner actions. We proposed TeamPortal accordingly and conducted two user studies with 72 participants (36 pairs) to investigate the potential benefits of interactive, shared perspectives in VR collaboration. Our first study compared ShaView and TeamPortal against a baseline in a collaborative task that encompassed a series of searching and manipulation tasks. The results show that TeamPortal significantly reduced movement and increased collaborative efficiency and social presence in complex tasks. Following the results, the second study evaluated three variants: TeamPortal+, SnapTeamPortal+, and DropTeamPortal+. The results show that both SnapTeamPortal+ and DropTeamPortal+ improved task efficiency and willingness to further adopt these technologies, though SnapTeamPortal+ reduced co-presence. Based on the findings, we proposed three design implications to inform the development of future VR collaboration systems.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
"One Body, but Four Hands": Exploring the Role of Virtual Hands in Virtual Co-embodiment. "一个身体,四只手":探索虚拟手在虚拟共生中的作用。
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549883
Jingjing Zhang, Xiyao Jin, Han Tu, Hai-Ning Liang, Zhuying Li, Xin Tong

Virtual co-embodiment in virtual reality (VR) allows two users to share an avatar, enabling skill transfer from teachers to learners and influencing their Sense of Ownership (SoO) and Sense of Agency (SoA). However, mismatches between actual movements and displayed actions in VR can impair user experience, posing challenges to learning effectiveness. Although previous studies have addressed the influence of virtual bodies' visual factors on SoO and SoA, the impact of co-embodied hands' appearances remains underexplored. We conducted two user studies to examine how the existence of virtual self-hands and their visual factors (transparency and congruency) affect SoO, SoA, and social presence. Study One showed significant improvements in SoO and SoA when virtual self-hands were present. In Study Two, we kept the self-hands and further focused on hand transparency and congruency, finding that identical appearances between self-hands and co-embodied hands significantly enhanced SoO. These findings stress the importance of visual factors for virtual hands, offering valuable insights for VR co-embodiment design.

{"title":"\"One Body, but Four Hands\": Exploring the Role of Virtual Hands in Virtual Co-embodiment.","authors":"Jingjing Zhang, Xiyao Jin, Han Tu, Hai-Ning Liang, Zhuying Li, Xin Tong","doi":"10.1109/TVCG.2025.3549883","DOIUrl":"10.1109/TVCG.2025.3549883","url":null,"abstract":"<p><p>Virtual co-embodiment in virtual reality (VR) allows two users to share an avatar, enabling skill transfer from teachers to learners and influencing their Sense of Ownership (SoO) and Sense of Agency (SoA). However, mismatches between actual movements and displayed actions in VR can impair user experience, posing challenges to learning effectiveness. Although previous studies have addressed the influence of virtual bodies' visual factors on SoO and SoA, the impact of co-embodied hands' appearances remains underexplored. We conducted two user studies to examine the effects of virtual self-hands' existence and their visual factors (transparency and congruency) on SoO, SoA, and social presence. Study One showed significant improvements in SoO and SoA with the existence of virtual self-hands. In Study Two, we kept the self-hands and further focused on hand transparency and congruency. We found that identical appearances between self-hands and co-embodied hands significantly enhanced SoO. These findings stressed the importance of visual factors for virtual hands, offering valuable insights for VR co-embodiment design.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BoundaryScreen: Summoning the Home Screen in VR via Walking Outward.
Pub Date: 2025-03-11 | DOI: 10.1109/TVCG.2025.3549536
Yang Tian, Xingjia Hao, Jianchun Su, Wei Sun, Yangjian Pan, Yunhai Wang, Minghui Sun, Teng Han, Ningjiang Chen

A safety boundary wall in VR is a virtual barrier that defines a safe area, allowing users to navigate and interact without safety concerns. However, existing implementations neglect to utilize the safety boundary wall's large surface for displaying interactive information. In this work, we propose the BoundaryScreen technique based on the "walking outward" metaphor to add interactivity to the safety boundary wall. Specifically, we augment the safety boundary wall by placing the home screen on it. To summon the home screen, the user only needs to walk outward until it appears. Results showed that (i) participants significantly preferred BoundaryScreen in the outermost two-step-wide ring-shaped section of a circular safety area; and (ii) participants exhibited strong "behavioral inertia" for walking, i.e., after completing a routine activity involving constant walking, participants significantly preferred to use the walking-based BoundaryScreen technique to summon the home screen.
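The trigger condition can be sketched as a simple ring test on the user's horizontal position within the circular safety area. The ~0.7 m step length used to derive the two-step ring width is an illustrative assumption, as is the 3 m safety radius in the usage example.

```python
import math

def should_summon_home_screen(user_xz, center_xz, safe_radius,
                              ring_width=2 * 0.7):
    """Summon the home screen when the user walks into the outermost
    two-step-wide ring of the circular safety area (~0.7 m per step)."""
    dx = user_xz[0] - center_xz[0]
    dz = user_xz[1] - center_xz[1]
    dist = math.hypot(dx, dz)  # horizontal distance from the area's center
    return safe_radius - ring_width <= dist <= safe_radius

# A user walks outward from the center of a 3 m safety circle.
for step, x in enumerate([0.0, 0.8, 1.6, 2.0, 2.8]):
    if should_summon_home_screen((x, 0.0), (0.0, 0.0), safe_radius=3.0):
        print(f"step {step}: summon home screen on the boundary wall")
```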

{"title":"BoundaryScreen: Summoning the Home Screen in VR via Walking Outward.","authors":"Yang Tian, Xingjia Hao, Jianchun Su, Wei Sun, Yangjian Pan, Yunhai Wang, Minghui Sun, Teng Han, Ningjiang Chen","doi":"10.1109/TVCG.2025.3549536","DOIUrl":"10.1109/TVCG.2025.3549536","url":null,"abstract":"<p><p>A safety boundary wall in VR is a virtual barrier that defines a safe area, allowing users to navigate and interact without safety concerns. However, existing implementations neglect to utilize the safety boundary wall's large surface for displaying interactive information. In this work, we propose the BoundaryScreen technique based on the \"walking outward\" metaphor to add interactivity to the safety boundary wall. Specifically, we augment the safety boundary wall by placing the home screen on it. To summon the home screen, the user only needs to walk outward until it appears. Results showed that (i) participants significantly preferred BoundaryScreen in the outermost two-step-wide ring-shaped section of a circular safety area; and (ii) participants exhibited strong \"behavioral inertia\" for walking, i.e., after completing a routine activity involving constant walking, participants significantly preferred to use the walking-based BoundaryScreen technique to summon the home screen.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0