
Latest publications in IEEE transactions on visualization and computer graphics

Peripheral Teleportation: A Rest Frame Design to Mitigate Cybersickness During Virtual Locomotion.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549568
Tongyu Nie, Courtney Hutton Pospick, Ville Cantory, Danhua Zhang, Jasmine Joyce DeGuzman, Victoria Interrante, Isayas Berhe Adhanom, Evan Suma Rosenberg

Mitigating cybersickness can improve the usability of virtual reality (VR) and increase its adoption. The most widely used technique, dynamic field-of-view (FOV) restriction, mitigates cybersickness by blacking out the peripheral region of the user's FOV. However, this approach reduces the visibility of the virtual environment. We propose peripheral teleportation, a novel technique that creates a rest frame (RF) in the user's peripheral vision using content rendered from the current virtual environment. Specifically, the peripheral region is rendered by a pair of RF cameras whose transforms are updated by the user's physical motion. We apply alternating teleportations during translations, or snap turns during rotations, to the RF cameras to keep them close to the current viewpoint transformation. Consequently, the optical flow generated by the RF cameras matches the user's physical motion, creating a stable peripheral view. In a between-subjects study (N=90), we compared peripheral teleportation with a traditional black FOV restrictor and an unrestricted control condition. The results showed that peripheral teleportation significantly reduced discomfort and enabled participants to stay immersed in the virtual environment for a longer duration. Overall, these findings suggest that peripheral teleportation is a promising technique that VR practitioners may consider adding to their cybersickness mitigation toolset.
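The update rule described above (RF cameras that track physical motion continuously, but catch up to virtual locomotion only through discrete teleports and snap turns) can be illustrated as follows. This is a minimal sketch under our own assumptions about class structure and thresholds, not the authors' implementation.

```python
import numpy as np

# Illustrative thresholds; the paper does not publish these values.
TELEPORT_DISTANCE = 0.75            # meters of virtual travel before a teleport
SNAP_TURN_ANGLE = np.radians(30.0)  # accumulated virtual rotation before a snap turn

class RestFrameCamera:
    """Hypothetical single RF camera (the paper uses a stereo pair)."""

    def __init__(self):
        self.position = np.zeros(3)
        self.yaw = 0.0

    def update(self, physical_delta_pos, physical_delta_yaw,
               main_cam_pos, main_cam_yaw):
        # 1. Follow the user's *physical* motion 1:1, so peripheral
        #    optical flow always matches vestibular cues.
        self.position += physical_delta_pos
        self.yaw += physical_delta_yaw

        # 2. If virtual locomotion has carried the main camera too far
        #    away, teleport the RF camera to it: a discrete jump adds
        #    no spurious optical flow in the periphery.
        if np.linalg.norm(main_cam_pos - self.position) > TELEPORT_DISTANCE:
            self.position = main_cam_pos.copy()

        # 3. Likewise, resolve accumulated virtual rotation with a snap turn.
        if abs(main_cam_yaw - self.yaw) > SNAP_TURN_ANGLE:
            self.yaw = main_cam_yaw
```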

{"title":"Peripheral Teleportation: A Rest Frame Design to Mitigate Cybersickness During Virtual Locomotion.","authors":"Tongyu Nie, Courtney Hutton Pospick, Ville Cantory, Danhua Zhang, Jasmine Joyce DeGuzman, Victoria Interrante, Isayas Berhe Adhanom, Evan Suma Rosenberg","doi":"10.1109/TVCG.2025.3549568","DOIUrl":"10.1109/TVCG.2025.3549568","url":null,"abstract":"<p><p>Mitigating cybersickness can improve the usability of virtual reality (VR) and increase its adoption. The most widely used technique, dynamic field-of-view (FOV) restriction, mitigates cybersickness by blacking out the peripheral region of the user's FOV. However, this approach reduces the visibility of the virtual environment. We propose peripheral teleportation, a novel technique that creates a rest frame (RF) in the user's peripheral vision using content rendered from the current virtual environment. Specifically, the peripheral region is rendered by a pair of RF cameras whose transforms are updated by the user's physical motion. We apply alternating teleportations during translations, or snap turns during rotations, to the RF cameras to keep them close to the current viewpoint transformation. Consequently, the optical flow generated by RF cameras matches the user's physical motion, creating a stable peripheral view. In a between-subjects study (N=90), we compared peripheral teleportation with a traditional black FOV restrictor and an unrestricted control condition. The results showed that peripheral teleportation significantly reduced discomfort and enabled participants to stay immersed in the virtual environment for a longer duration of time. Overall, these findings suggest that peripheral teleportation is a promising technique that VR practitioners may consider adding to their cybersickness mitigation toolset.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Simulating Social Pressure: Evaluating Risk Behaviors in Construction Using Augmented Virtuality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549877
Shiva Pooladvand, Sogand Hasanzadeh, George Takahashi, Kenneth Jongwon Park, Jacob Marroquin

Drawing on social influence and behavioral intention theories, coworkers' risk-taking serves as an "extra motive" (an exogenous factor) for risk-taking behaviors among workers. Social influence theories have shown that social factors, such as social pressure and coworker risk-taking, may predict risk-taking behaviors and significantly affect decision-making. While immersive technologies have been widely used to create close-to-real simulations for construction safety studies, there is a paucity of research considering the impact of social presence when evaluating workers' risk decision-making within immersive environments. To bridge this gap, this study developed a state-of-the-art Augmented Virtuality (AV) environment to investigate roofers' risk-taking behaviors when exposed to social stressors (working alongside a safe/unsafe peer). In this AV environment, a virtual peer with safe or unsafe behaviors was simulated to impose peer pressure and increase participants' sense of social presence. Participants were asked to install asphalt shingles on a physical section of a roof (passive haptics) while the rest of the environment was projected virtually. During shingle installation, participants' cognitive and behavioral responses were captured using psychophysiological wearable technologies and self-report measures. The results demonstrated that the developed AV model successfully enhanced participants' sense of presence and social presence while serving as an appropriate platform for assessing individuals' decision-making orientations and behavioral changes in the presence of social stressors. These findings show the value of immersive technologies for examining the naturalistic responses of individuals without exposing them to actual risks.

{"title":"Simulating Social Pressure: Evaluating Risk Behaviors in Construction Using Augmented Virtuality.","authors":"Shiva Pooladvand, Sogand Hasanzadeh, George Takahashi, Kenneth Jongwon Park, Jacob Marroquin","doi":"10.1109/TVCG.2025.3549877","DOIUrl":"10.1109/TVCG.2025.3549877","url":null,"abstract":"<p><p>Drawing on social influence and behavioral intention theories, coworkers' risk-taking serves as an \"extra motive\"-an exogenous factor-for risk-taking behaviors among workers in the workplace. Social influence theories have shown that social factors, such as social pressure and coworker risk-taking, may predict risk-taking behaviors and significantly affect decision-making. While immersive technologies have been widely used to create close-to-real simulations for construction safety-related studies, there is a paucity of research considering the impact of social presence in evaluating workers' risk decision-making within immersive environments. To bridge this gap, this study developed a state-of-the-art Augmented Virtuality (AV) environment to investigate roofers' risk-taking behaviors when exposed to social stressors (working alongside a safe/unsafe peer). In this augmented virtuality environment, a virtual peer with safe and unsafe behaviors was simulated in order to impose peer pressure and increase participants' sense of social presence. Participants were asked to install asphalt shingles on a physical section of a roof (passive haptics) while the rest of the environment was projected virtually. During shingle installation, participants' cognitive and behavioral responses were captured using psychophysiological wearable technologies and self-report measures. The results demonstrated that the developed AV model could successfully enhance participants' sense of presence and social presence while serving as an appropriate platform for assessing individuals' decision-making orientations and behavioral changes in the presence of social stressors. Such information shows the value of immersive technologies to examine the naturalistic responses of individuals without exposing them to actual risks.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection Thresholds for Replay and Real-Time Discrepancies in VR Hand Redirection.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549571
Kiyu Tanaka, Takuto Nakamura, Keigo Matsumoto, Hideaki Kuzuoka, Takuji Narumi

Hand redirection, which subtly adjusts a user's hand movements in a virtual environment, can modify perception and movement by providing real-time corrections to motor feedback. In the context of motor learning and rehabilitation, observing replays of movements has been shown to enhance motor function. Applying hand redirection to these replays, by making movements appear larger or smaller than they actually are, has the potential to improve motor function. However, the detection threshold for hand redirection in the context of motion replays remains unclear, as it has primarily been studied in real-time feedback settings. This study aims to determine the threshold at which hand redirection during post-exercise replay sessions becomes detectable. We conducted two psychophysical experiments to evaluate how much discrepancy between replayed and actual movements can go unnoticed by users, both with hand redirection (N=20) and without (N=18). Our findings reveal a tendency for the amount of movement during replay to be underestimated. Furthermore, compared to conventional real-time hand redirection without replay, replay manipulations involving redirection applied during the preceding reaching task resulted in a significantly larger just-noticeable difference (JND). These insights are crucial for leveraging hand redirection techniques in replay-based motor learning applications.
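For readers unfamiliar with the manipulation being detected: a common formulation of hand redirection applies a gain to the hand's displacement from a movement origin. The sketch below uses that formulation with parameter names of our choosing; the abstract does not specify the authors' exact implementation.

```python
import numpy as np

def redirect_hand(real_hand_pos, origin, gain):
    """Return the displayed (virtual) hand position for a redirection gain.

    gain > 1 makes the real-time or replayed movement appear larger than
    it was; gain < 1 makes it appear smaller; gain == 1 is veridical.
    """
    real_hand_pos = np.asarray(real_hand_pos, dtype=float)
    origin = np.asarray(origin, dtype=float)
    return origin + gain * (real_hand_pos - origin)

# Example: a 10 cm real reach shown with a 1.2x gain appears as 12 cm.
origin = np.array([0.0, 0.0, 0.0])
real = np.array([0.0, 0.0, 0.10])
print(redirect_hand(real, origin, 1.2))  # -> [0.  0.  0.12]
```

The JND measured in the paper is the smallest gain-induced discrepancy users can reliably detect; the finding is that this tolerance is larger for replays than for real-time redirection.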

{"title":"Detection Thresholds for Replay and Real-Time Discrepancies in VR Hand Redirection.","authors":"Kiyu Tanaka, Takuto Nakamura, Keigo Matsumoto, Hideaki Kuzuoka, Takuji Narumi","doi":"10.1109/TVCG.2025.3549571","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549571","url":null,"abstract":"<p><p>Hand redirection, which subtly adjusts a user's hand movements in a virtual environment, can modify perception and movement by providing real-time corrections to motor feedback. In the context of motor learning and rehabilitation, observing replays of movements has been shown to enhance motor function. The application of hand redirection to these replays by making movements appear larger or smaller than they actually are has the potential to improve motor function. However, the detection threshold for hand redirection, specifically in the context of motion replays, remains unclear, as it has primarily been studied in real-time feedback settings. This study aims to determine the threshold at which hand redirection during post-exercise replay sessions becomes detectable. We conducted two psychophysical experiments to evaluate how much discrepancy between replayed and actual movements can go unnoticed by users, both with hand redirection (N=20) and without (N=18). Our findings reveal a tendency for the amount of movement during replay to be underestimated. Furthermore, compared to conventional real-time hand redirection without replay, replay manipulations involving redirection applied during the preceding reaching task resulted in a significantly larger JND. These insights are crucial for leveraging hand redirection techniques in replay-based motor learning applications.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Social Experiences in Immersive Virtual Reality with Artificial Facial Mimicry.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549163
Alessandro Visconti, Davide Calandra, Federica Giorgione, Fabrizio Lamberti

The growing availability of affordable Virtual Reality (VR) hardware and the increasing interest in the Metaverse are driving the expansion of Social VR (SVR) platforms. These platforms allow users to embody avatars in immersive social virtual environments, enabling real-time interactions using consumer devices. Beyond merely replicating real-life social dynamics, SVR platforms offer opportunities to surpass real-world constraints by augmenting these interactions. One example of such augmentation is Artificial Facial Mimicry (AFM), which holds significant potential to enhance social experiences. Mimicry, the unconscious imitation of verbal and non-verbal behaviors, has been shown to positively affect human-agent interactions, yet its role in avatar-mediated human-to-human communication remains under-explored. AFM presents various possibilities, such as amplifying emotional expressions or substituting one emotion for another to better align with the context. Furthermore, AFM can address the limitations of current facial tracking technologies in fully capturing users' emotions. To investigate the potential benefits of AFM in SVR, an automated AFM system was developed. This system provides AFM along with other kinds of head mimicry (nodding and eye contact), and it is compatible with consumer VR devices equipped with facial tracking. The system was deployed within a test-bench immersive SVR application. A between-dyads user study was conducted to assess the potential benefits of AFM for interpersonal communication while maintaining avatar behavioral naturalness, comparing the experiences of pairs of participants communicating with AFM enabled against a baseline condition. Subjective measures revealed that AFM improved interpersonal closeness, aspects of social attraction, interpersonal trust, social presence, and naturalness compared to the baseline condition. These findings demonstrate AFM's positive impact on key aspects of social interaction and highlight its potential applications across various SVR domains.
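The two manipulation strategies named above, amplification and substitution, can be pictured as operations on tracked facial blendshape weights in [0, 1]. The functions and example values below are a minimal sketch under our own assumptions, not the authors' AFM system.

```python
def amplify(weights: dict[str, float], factor: float) -> dict[str, float]:
    """Scale tracked expression weights, clamped to the valid [0, 1] range."""
    return {name: min(1.0, w * factor) for name, w in weights.items()}

def substitute(weights: dict[str, float], source: str, target: str) -> dict[str, float]:
    """Re-route the intensity of one expression onto another."""
    out = dict(weights)
    out[target] = max(out.get(target, 0.0), out.pop(source, 0.0))
    return out

tracked = {"smile": 0.1, "frown": 0.6}
print(amplify(tracked, 1.5))                  # weights scaled 1.5x, clamped at 1.0
print(substitute(tracked, "frown", "smile"))  # frown intensity re-routed onto smile
```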

{"title":"Enhancing Social Experiences in Immersive Virtual Reality with Artificial Facial Mimicry.","authors":"Alessandro Visconti, Davide Calandra, Federica Giorgione, Fabrizio Lamberti","doi":"10.1109/TVCG.2025.3549163","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549163","url":null,"abstract":"<p><p>The growing availability of affordable Virtual Reality (VR) hardware and the increasing interest in the Metaverse are driving the expansion of Social VR (SVR) platforms. These platforms allow users to embody avatars in immersive social virtual environments, enabling real-time interactions using consumer devices. Beyond merely replicating real-life social dynamics, SVR platforms offer opportunities to surpass real-world constraints by augmenting these interactions. One example of such augmentation is Artificial Facial Mimicry (AFM), which holds significant potential to enhance social experiences. Mimicry, the unconscious imitation of verbal and non-verbal behaviors, has been shown to positively affect human-agent interactions, yet its role in avatar-mediated human-to-human communication remains under-explored. AFM presents various possibilities, such as amplifying emotional expressions, or substituting one emotion for another to better align with the context. Furthermore, AFM can address the limitations of current facial tracking technologies in fully capturing users' emotions. To investigate the potential benefits of AFM in SVR, an automated AM system was developed. This system provides AFM, along with other kinds of head mimicry (nodding and eye contact), and it is compatible with consumer VR devices equipped with facial tracking. This system was deployed within a test-bench immersive SVR application. A between-dyads user study was conducted to assess the potential benefits of AFM for interpersonal communication while maintaining avatar behavioral naturalness, comparing the experiences of pairs of participants communicating with AFM enabled against a baseline condition. Subjective measures revealed that AFM improved interpersonal closeness, aspects of social attraction, interpersonal trust, social presence, and naturalness compared to the baseline condition. These findings demonstrate AFM's positive impact on key aspects of social interaction and highlight its potential applications across various SVR domains.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Focus-Driven Augmented Feedback: Enhancing Focus and Maintaining Engagement in Upper Limb Virtual Reality Rehabilitation.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549543
Kai-Lun Liao, Mengjie Huang, Jiajia Shi, Min Chen, Rui Yang

Integrating biofeedback technology, such as real-time eye tracking, has revolutionized the landscape of virtual reality (VR) rehabilitation games, offering new opportunities for personalized therapy. Motivated by the goal of increasing patient focus during rehabilitation, the Focus-Driven Augmented Feedback (FDAF) system was developed to enhance focus and maintain engagement during upper limb VR rehabilitation. This novel approach dynamically adjusts augmented visual feedback based on a patient's gaze, creating a personalized rehabilitation experience tailored to individual needs. This research aims to develop and comprehensively evaluate the FDAF system to enhance patient focus and maintain engagement in VR rehabilitation environments. The methodology involved three experimental studies, which tested varying levels of augmented feedback with 71 healthy participants and 17 patients requiring upper limb rehabilitation. The results demonstrated that a 30% augmented level was optimal for healthy participants, while a 20% level was most effective for patients, ensuring sustained engagement without inducing discomfort. These findings highlight the potential of eye-tracking technology to dynamically customize feedback in VR rehabilitation, leading to more effective therapy and improved patient outcomes. This research contributes significant advancements in developing personalized VR rehabilitation techniques, offering valuable insights for future therapeutic applications.
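As a reading aid, the sketch below shows one way a gaze-contingent "augmented level" could modulate visual feedback: unmodified while gaze rests on the therapy target, ramping up to the configured level (e.g., the 20% found effective for patients) as gaze drifts away. The mechanism, names, and thresholds here are all our assumptions; the abstract does not publish the FDAF implementation.

```python
import math

def feedback_gain(gaze_target_dist_deg: float,
                  augmented_level: float = 0.20,
                  focus_radius_deg: float = 5.0) -> float:
    """Return a visual-feedback gain in [1.0, 1.0 + augmented_level].

    Inside the focus radius the scene is unmodified (gain 1.0); as gaze
    drifts farther from the target, feedback is progressively augmented
    to draw the patient's focus back.
    """
    drift = max(0.0, gaze_target_dist_deg - focus_radius_deg)
    ramp = 1.0 - math.exp(-drift / focus_radius_deg)  # smooth 0 -> 1
    return 1.0 + augmented_level * ramp

print(feedback_gain(2.0))   # gaze on target: 1.0 (no augmentation)
print(feedback_gain(20.0))  # gaze far off target: approaches 1.2
```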

{"title":"Focus-Driven Augmented Feedback: Enhancing Focus and Maintaining Engagement in Upper Limb Virtual Reality Rehabilitation.","authors":"Kai-Lun Liao, Mengjie Huang, Jiajia Shi, Min Chen, Rui Yang","doi":"10.1109/TVCG.2025.3549543","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549543","url":null,"abstract":"<p><p>Integrating biofeedback technology, such as real-time eye-tracking, has revolutionized the landscape of virtual reality (VR) rehabilitation games, offering new opportunities for personalized therapy. Motivated to increase patient focus during rehabilitation, the Focus-Driven Augmented Feedback (FDAF) system was developed to enhance focus and maintain engagement during upper limb VR rehabilitation. This novel approach dynamically adjusts augmented visual feedback based on a patient's gaze, creating a personalised rehabilitation experience tailored to individual needs. This research aims to develop and comprehensively evaluate the FDAF system to enhance patient focus and maintain engagement in VR rehabilitation environments. The methodology involved three experimental studies, which tested varying levels of augmented feedback with 71 healthy participants and 17 patients requiring upper limb rehabilitation. The results demonstrated that a 30% augmented level was optimal for healthy participants, while a 20% was most effective for patients, ensuring sustained engagement without inducing discomfort. The research's findings highlight the potential of eye-tracking technology to dynamically customise feedback in VR rehabilitation, leading to more effective therapy and improved patient outcomes. This research contributes significant advancements in developing personalised VR rehabilitation techniques, offering valuable insights for future therapeutic applications.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the Impact of Video Pass-Through Embodiment on Presence and Performance in Virtual Reality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549891
Kristoffer Waldow, Constantin Kleinbeck, Arnulph Fuhrmann, Daniel Roth

Creating a compelling sense of presence and embodiment can enhance the user experience in virtual reality (VR). One method to accomplish this is through self-representation with embodied personalized avatars or video self-avatars. However, these approaches require external hardware and have primarily been evaluated through hand representations in VR across various tasks. In this paper, we therefore present an alternative approach: video Pass-Through Embodiment (PTE), which utilizes the per-eye real-time depth map from Head-Mounted Displays (HMDs), traditionally used for Augmented Reality features. This method allows the user's real body to be cut out of the pass-through video stream and represented in the VR environment without the need for additional hardware. To evaluate our approach, we conducted a between-subjects study involving 40 participants who completed a seated object sorting task using either PTE or a customized avatar. The results show that PTE, despite its limited depth resolution leading to some visual artifacts, significantly enhances the user's sense of presence and embodiment. In addition, PTE does not negatively affect task performance or cognitive load, nor does it cause VR sickness. These findings imply that video pass-through embodiment offers a practical and efficient alternative to traditional avatar-based methods in VR.
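The cut-out itself reduces to per-frame depth thresholding and compositing. A minimal sketch, assuming a metric depth map aligned with the pass-through image and a fixed arm's-reach threshold (both illustrative assumptions, not values from the paper):

```python
import numpy as np

BODY_DEPTH_LIMIT_M = 1.2  # assume anything nearer than ~1.2 m is the user's body

def composite_pte(vr_rgb, passthrough_rgb, depth_m):
    """Overlay near-field pass-through pixels (the user's body) onto the VR frame.

    vr_rgb, passthrough_rgb: (H, W, 3) arrays for one eye.
    depth_m: (H, W) metric depth estimate aligned with the pass-through image.
    """
    body_mask = depth_m < BODY_DEPTH_LIMIT_M     # boolean (H, W) body segmentation
    out = vr_rgb.copy()
    out[body_mask] = passthrough_rgb[body_mask]  # composite body over VR scene
    return out
```

Run per eye and per frame; the paper's noted visual artifacts stem from the limited resolution of the HMD's depth estimate feeding this mask.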

{"title":"Investigating the Impact of Video Pass-Through Embodiment on Presence and Performance in Virtual Reality.","authors":"Kristoffer Waldow, Constantin Kleinbeck, Arnulph Fuhrmann, Daniel Roth","doi":"10.1109/TVCG.2025.3549891","DOIUrl":"10.1109/TVCG.2025.3549891","url":null,"abstract":"<p><p>Creating a compelling sense of presence and embodiment can enhance the user experience in virtual reality (VR). One method to accomplish this is through self-representation with embodied personalized avatars or video self-avatars. However, these approaches require external hardware and primarily evaluate hand representations in VR across various tasks. We therefore present in this paper an alternative approach: video Pass-Through Embodiment (PTE), which utilizes the per-eye real-time depth map from Head-Mounted Displays (HMDs) traditionally used for Augmented Reality features. This method allows the user's real body to be cut out of the pass-through video stream and be represented in the VR environment without the need for additional hardware. To evaluate our approach, we conducted a between-subjects study involving 40 participants who completed a seated object sorting task using either PTE or a customized avatar. The results show that PTE, despite its limited depth resolution that leads to some visual artifacts, significantly enhances the user's sense of presence and embodiment. In addition, PTE does not negatively affect task performance, cognitive load, or cause VR sickness. These findings imply that video pass-through embodiment offers a practical and efficient alternative to traditional avatar-based methods in VR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SeamlessVR: Bridging the Immersive to Non-Immersive Visualization Divide.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549564
Shuqi Liao, Sparsh Chaudhri, Maanas K Karwa, Voicu Popescu

The paper describes SeamlessVR, a method for switching effectively from immersive visualization in a virtual reality (VR) headset to non-immersive visualization on screen. SeamlessVR implements a continuous morph of the 3D visualization into a 2D visualization that matches what the user will see on screen after removing the headset. This visualization continuity reduces the cognitive effort of connecting the immersive to the non-immersive visualization, helping the user continue on screen a visualization task started in the headset. We compared SeamlessVR to the conventional approach of directly removing the headset in an IRB-approved user study with N = 30 participants. SeamlessVR had a significant advantage over the conventional approach in time and accuracy for target tracking in complex abstract and realistic scenes, in participants' perception of the switch from immersive to non-immersive visualization, and in usability. SeamlessVR did not pose cybersickness concerns.
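The continuous morph can be pictured as a per-vertex interpolation between each point's 3D position and the spot where its screen projection lands on a plane in front of the eye. The sketch below works in eye space with a standard 4x4 projection matrix; it is our simplified reading of the idea, not the paper's exact formulation.

```python
import numpy as np

def morph_vertices(eye_pts, proj, plane_dist, t):
    """Blend vertices from 3D (t = 0) to their flattened 2D layout (t = 1).

    eye_pts: (n, 3) vertex positions in eye space (camera looks down -z).
    proj: (4, 4) projection matrix.
    plane_dist: distance of the flattening plane in front of the eye.
    """
    n = len(eye_pts)
    homo = np.hstack([eye_pts, np.ones((n, 1))])   # homogeneous coordinates
    clip = homo @ proj.T
    ndc = clip[:, :3] / clip[:, 3:4]               # perspective divide -> NDC
    # Lay each projected vertex out on the flattening plane.
    flat = np.column_stack([ndc[:, 0], ndc[:, 1], np.full(n, -plane_dist)])
    return (1.0 - t) * eye_pts + t * flat          # t sweeps the 3D -> 2D morph
```

Animating t from 0 to 1 before the headset comes off yields the on-screen layout the user will see next, which is the continuity the study credits for the reduced cognitive effort.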

{"title":"SeamlessVR: Bridging the Immersive to Non-Immersive Visualization Divide.","authors":"Shuqi Liao, Sparsh Chaudhri, Maanas K Karwa, Voicu Popescu","doi":"10.1109/TVCG.2025.3549564","DOIUrl":"10.1109/TVCG.2025.3549564","url":null,"abstract":"<p><p>The paper describes SeamlessVR, a method for switching effectively from immersive visualization, in a virtual reality (VR) headset, to non-immersive visualization, on screen. SeamlessVR implements a continuous morph of the 3D visualization to a 2D visualization that matches what the user will see on screen after removing the headset. This visualization continuity reduces the cognitive effort of connecting the immersive to the non-immersive visualization, helping the user continue on screen a visualization task started in the headset. We have compared SeamlessVR to the conventional approach of directly removing the headset in an IRB-approved user study with N = 30 participants. SeamlessVR had a significant advantage over the conventional approach in terms of time and accuracy for target tracking in complex abstract and realistic scenes and in terms of participants' perception of the switch from immersive to non-immersive visualization, as well as in terms of usability. SeamlessVR did not pose cybersickness concerns.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
X's Day: Personality-Driven Virtual Human Behavior Generation.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549574
Haoyang Li, Zan Wang, Wei Liang, Yizhuo Wang

Developing convincing and realistic virtual human behavior is essential for enhancing user experiences in virtual reality (VR) and augmented reality (AR) settings. This paper introduces a novel task focused on generating long-term behaviors for virtual agents, guided by specific personality traits and contextual elements within 3D environments. We present a comprehensive framework capable of autonomously producing daily activities autoregressively. By modeling the intricate connections between personality characteristics and observable activities, we establish a hierarchical structure of Needs, Task, and Activity levels. Integrating a Behavior Planner and a World State module allows for the dynamic sampling of behaviors using large language models (LLMs), ensuring that generated activities remain relevant and responsive to environmental changes. Extensive experiments validate the effectiveness and adaptability of our approach across diverse scenarios. This research makes a significant contribution to the field by establishing a new paradigm for personalized and context-aware interactions with virtual humans, ultimately enhancing user engagement in immersive applications. Our project website is at: https://behavior.agent-x.cn/.
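The autoregressive generation can be sketched as repeated prompting conditioned on personality, world state, and the activity history so far. Everything below (the prompt layout, the generate() placeholder, the function names) is hypothetical scaffolding for illustration; the authors' Behavior Planner API is not published in the abstract.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real client."""
    return "placeholder activity"  # e.g. "make coffee in the kitchen"

def simulate_day(personality: str, world_state: dict, steps: int = 16) -> list[str]:
    """Autoregressively sample a day of activities for one virtual agent."""
    history: list[str] = []
    for _ in range(steps):
        prompt = (
            f"Personality: {personality}\n"
            f"World state: {world_state}\n"
            f"Activities so far: {history}\n"
            "Choose the agent's next activity (one line):"
        )
        activity = generate(prompt)   # autoregressive: conditioned on history
        history.append(activity)
        world_state["hour"] = world_state.get("hour", 8) + 1  # advance the clock
    return history

print(simulate_day("introverted, diligent", {"location": "apartment", "hour": 8}))
```

The hierarchy in the paper (Needs driving Tasks driving Activities) would sit above this loop, with the World State module keeping the prompt's environment description current.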

{"title":"X's Day: Personality-Driven Virtual Human Behavior Generation.","authors":"Haoyang Li, Zan Wang, Wei Liang, Yizhuo Wang","doi":"10.1109/TVCG.2025.3549574","DOIUrl":"10.1109/TVCG.2025.3549574","url":null,"abstract":"<p><p>Developing convincing and realistic virtual human behavior is essential for enhancing user experiences in virtual reality (VR) and augmented reality (AR) settings. This paper introduces a novel task focused on generating long-term behaviors for virtual agents, guided by specific personality traits and contextual elements within 3D environments. We present a comprehensive framework capable of autonomously producing daily activities autoregressively. By modeling the intricate connections between personality characteristics and observable activities, we establish a hierarchical structure of Needs, Task, and Activity levels. Integrating a Behavior Planner and a World State module allows for the dynamic sampling of behaviors using large language models (LLMs), ensuring that generated activities remain relevant and responsive to environmental changes. Extensive experiments validate the effectiveness and adaptability of our approach across diverse scenarios. This research makes a significant contribution to the field by establishing a new paradigm for personalized and context-aware interactions with virtual humans, ultimately enhancing user engagement in immersive applications. Our project website is at: https://behavior.agent-x.cn/.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Don't They Really Hear Us? A Design Space for Private Conversations in Social Virtual Reality.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549844
Josephus Jasper Limbago, Robin Welsch, Florian Muller, Mario Di Francesco

Seamless transition between public dialogue and private talks is essential in everyday conversations. Social Virtual Reality (VR) has revolutionized interpersonal communication by creating a sense of closeness over distance through virtual avatars. However, existing social VR platforms are not successful in providing safety and supporting private conversations, thereby hindering self-disclosure and limiting the potential for meaningful experiences. We approach this problem by exploring the factors affecting private conversations in social VR applications, including the usability of different interaction methods and users' awareness of the virtual world. We conducted both expert interviews and a controlled experiment with a social VR prototype we developed. We then leverage the outcomes of the two studies to establish a design space that considers diverse dimensions (including privacy levels, social awareness, and modalities), laying the groundwork for more intuitive and meaningful experiences of private conversation in social VR.

{"title":"Don't They Really Hear Us? A Design Space for Private Conversations in Social Virtual Reality.","authors":"Josephus Jasper Limbago, Robin Welsch, Florian Muller, Mario Di Francesco","doi":"10.1109/TVCG.2025.3549844","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549844","url":null,"abstract":"<p><p>Seamless transition between public dialogue and private talks is essential in everyday conversations. Social Virtual Reality (VR) has revolutionized interpersonal communication by creating a sense of closeness over distance through virtual avatars. However, existing social VR platforms are not successful in providing safety and supporting private conversations, thereby hindering self-disclosure and limiting the potential for meaningful experiences. We approach this problem by exploring the factors affecting private conversations in social VR applications, including the usability of different interaction methods and the awareness with respect to the virtual world. We conduct both expert interviews and a controlled experiment with a social VR prototype we realized. We then leverage the outcomes of the two studies to establish a design space that considers diverse dimensions (including privacy levels, social awareness, and modalities), laying the groundwork for more intuitive and meaningful experiences of private conversation in social VR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Environment Spatial Restitution for Remote Physical AR Collaboration.
Pub Date : 2025-03-10 DOI: 10.1109/TVCG.2025.3549533
Bruno Caby, Guillaume Bataille, Florence Danglade, Jean-Remy Chardonnet

The emergence of spatial immersive technologies enables new ways to collaborate remotely. However, these technologies still need to be studied and enhanced to improve their effectiveness and usability for collaborators. Remote Physical Collaborative Extended Reality (RPC-XR) consists of solving augmented physical tasks with the help of remote collaborators. This paper presents our RPC-AR system and a user study evaluating it during a network hardware assembly task. Our system offers verbal and non-verbal interpersonal communication functionalities. Users embody avatars and interact with their remote collaborators through hand, head, and eye tracking, and voice. Our system also captures an environment spatially in real time and renders it in a shared virtual space. We designed it to be lightweight and to avoid instrumenting the collaborative environment or requiring preliminary setup steps. It performs capture, transmission, and remote rendering of real environments in less than 250 ms. We ran a cascading user study to compare our system with a commercial 2D video collaborative application, measuring mutual awareness, task load, usability, and task performance. We also present an adapted Uncanny Valley questionnaire to compare the perception of remote environments between systems. We found that our application resulted in better empathy between collaborators, a higher cognitive load, and a lower, though still acceptable, level of usability for the remote user. We did not observe any significant difference in performance. These results are encouraging, as participants' observations provide insights to further improve the performance and usability of RPC-AR.

{"title":"Environment Spatial Restitution for Remote Physical AR Collaboration.","authors":"Bruno Caby, Guillaume Bataille, Florence Danglade, Jean-Remy Chardonnet","doi":"10.1109/TVCG.2025.3549533","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549533","url":null,"abstract":"<p><p>The emergence of spatial immersive technologies allows new ways to collaborate remotely. However, they still need to be studied and enhanced in order to improve their effectiveness and usability for collaborators. Remote Physical Collaborative Extended Reality (RPC-XR) consists in solving augmented physical tasks with the help of remote collaborators. This paper presents our RPC-AR system and a user study evaluating this system during a network hardware assembly task. Our system offers verbal and non-verbal interpersonal communication functionalities. Users embody avatars and interact with their remote collaborators thanks to hand, head and eye tracking, and voice. Our system also captures an environment spatially, in real-time and renders it in a shared virtual space. We designed it to be lightweight and to avoid instrumenting collaborative environments and preliminary steps. It performs capture, transmission and remote rendering of real environments in less than 250ms. We ran a cascading user study to compare our system with a commercial 2D video collaborative application. We measured mutual awareness, task load, usability and task performance. We present an adapted Uncanny Valley questionnaire to compare the perception of remote environments between systems. We found that our application resulted in better empathy between collaborators, a higher cognitive load and a lower level of usability, remaining acceptable, to the remote user. We did not observe any significant difference in performance. These results are encouraging, as participants' observations provide insights to further improve the performance and usability of RPC-AR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0