
Latest publications: IEEE Transactions on Visualization and Computer Graphics

X's Day: Personality-Driven Virtual Human Behavior Generation.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549574
Haoyang Li, Zan Wang, Wei Liang, Yizhuo Wang

Developing convincing and realistic virtual human behavior is essential for enhancing user experiences in virtual reality (VR) and augmented reality (AR) settings. This paper introduces a novel task focused on generating long-term behaviors for virtual agents, guided by specific personality traits and contextual elements within 3D environments. We present a comprehensive framework capable of autonomously producing daily activities autoregressively. By modeling the intricate connections between personality characteristics and observable activities, we establish a hierarchical structure of Needs, Task, and Activity levels. Integrating a Behavior Planner and a World State module allows for the dynamic sampling of behaviors using large language models (LLMs), ensuring that generated activities remain relevant and responsive to environmental changes. Extensive experiments validate the effectiveness and adaptability of our approach across diverse scenarios. This research makes a significant contribution to the field by establishing a new paradigm for personalized and context-aware interactions with virtual humans, ultimately enhancing user engagement in immersive applications. Our project website is at: https://behavior.agent-x.cn/.
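
The Needs → Task → Activity hierarchy and the Behavior Planner/World State coupling described above suggest an autoregressive sampling loop. Below is a minimal sketch of such a loop; the `llm_propose` stub, the prompt format, the toy needs model, and the state fields are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Hypothetical world state: time of day and object availability."""
    hour: int = 8
    objects: dict = field(default_factory=lambda: {"kettle": "idle", "desk": "free"})

def llm_propose(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completion API).
    Returns a canned answer so the sketch runs offline."""
    return "make tea" if "Need: thirst" in prompt else "write report"

def generate_day(personality: str, state: WorldState, steps: int = 3) -> list:
    """Autoregressively sample Need -> Task -> Activity, updating the world state."""
    schedule = []
    for _ in range(steps):
        need = "thirst" if state.hour % 3 == 0 else "achievement"   # toy needs model
        task = llm_propose(f"Personality: {personality}. Need: {need}. "
                           f"World: {state.objects}. Propose a task.")
        activity = f"{task} (at hour {state.hour})"   # ground the task to a concrete activity
        schedule.append(activity)
        state.hour += 1                               # the world state reacts to behavior
    return schedule

print(generate_day("conscientious introvert", WorldState()))
```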

Citations: 0
PaRUS: a Virtual Reality Shopping Method Focusing on Contextual Information between Products and Real Usage Scenes.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549539
Yinyu Lu, Weitao You, Ziqing Zheng, Yizhan Shao, Changyuan Yang, Zhibin Zhou

The development of AR and VR technologies is enhancing users' online shopping experiences in various ways. However, in existing VR shopping applications, the shopping context consists merely of the products and the virtual malls or metaphorical scenes in which users select them. As a result, users can only imagine, rather than intuitively perceive, whether the selected products suit their real usage scenes, leading to a significant discrepancy between their expectations before and after the purchase. To address this issue, we propose PaRUS, a VR shopping approach that focuses on the context between products and their real usage scenes. PaRUS begins by rebuilding a virtual counterpart of the product's real usage scene through a new semantic scene reconstruction pipeline (manual operation needed), which preserves both the structured scene and the textured object models within it. Afterwards, PaRUS provides an intuitive visualization of how the selected products fit into the reconstructed virtual scene. We conducted two user studies to evaluate how PaRUS affects user experience, behavior, and purchase satisfaction. The results indicated that PaRUS significantly reduced perceived performance risk and improved users' trust in, and expectations of, their purchases.
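
One small, concrete piece of the visualization step described above, checking whether a candidate product physically fits a free spot in the reconstructed scene, can be illustrated with an axis-aligned bounding-box test. This is a sketch under assumed data structures; the abstract does not describe PaRUS at this level of detail.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box in the reconstructed scene's coordinates (meters)."""
    min_xyz: tuple
    max_xyz: tuple

def fits(product: AABB, free_region: AABB, clearance: float = 0.02) -> bool:
    """True if the product box, padded by a small clearance, fits inside the free region."""
    return all(
        (p_max - p_min) + 2 * clearance <= (r_max - r_min)
        for p_min, p_max, r_min, r_max in zip(
            product.min_xyz, product.max_xyz, free_region.min_xyz, free_region.max_xyz
        )
    )

# A 0.6 m-wide sofa section vs. a 0.7 m gap scanned from the user's living room.
sofa = AABB((0.0, 0.0, 0.0), (0.6, 0.8, 0.9))
gap = AABB((1.0, 0.0, 0.5), (1.7, 1.0, 1.5))
print(fits(sofa, gap))  # True: 0.64 <= 0.7, 0.84 <= 1.0, 0.94 <= 1.0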

{"title":"PaRUS: a Virtual Reality Shopping Method Focusing on Contextual Information between Products and Real Usage Scenes.","authors":"Yinyu Lu, Weitao You, Ziqing Zheng, Yizhan Shao, Changyuan Yang, Zhibin Zhou","doi":"10.1109/TVCG.2025.3549539","DOIUrl":"10.1109/TVCG.2025.3549539","url":null,"abstract":"<p><p>The development of AR and VR technologies is enhancing users' online shopping experiences in various ways. However, in existing VR shopping applications, shopping contexts merely refer to the products and virtual malls or metaphorical scenes where users select products. This leads to the defect that users can only imagine rather than intuitively feel whether the selected products are suitable for their real usage scenes, resulting in a significant discrepancy between their expectations before and after the purchase. To address this issue, we propose PaRUS, a VR shopping approach that focuses on the context between products and their real usage scenexns. PaRUS begins by rebuilding the virtual scenario of the products' real usage scene through a new semantic scene reconstruction pipeline (manual operation needed), which preserves both the structured scene and textured object models in the scene. Afterwards, intuitive visualization of how the selected products fit the reconstructed virtual scene is provided. We conducted two user studies to evaluate how PaRUS impacts user experience, behavior, and satisfaction with their purchase. The results indicated that PaRUS significantly reduced the perceived performance risk and improved users' trust and expectation with their results of purchase.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Don't They Really Hear Us? A Design Space for Private Conversations in Social Virtual Reality.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549844
Josephus Jasper Limbago, Robin Welsch, Florian Muller, Mario Di Francesco

Seamless transitions between public dialogue and private talk are essential in everyday conversations. Social Virtual Reality (VR) has revolutionized interpersonal communication by creating a sense of closeness over distance through virtual avatars. However, existing social VR platforms do not succeed in providing safety and supporting private conversations, thereby hindering self-disclosure and limiting the potential for meaningful experiences. We approach this problem by exploring the factors affecting private conversations in social VR applications, including the usability of different interaction methods and awareness with respect to the virtual world. We conducted both expert interviews and a controlled experiment with a social VR prototype we developed. We then leverage the outcomes of the two studies to establish a design space that considers diverse dimensions (including privacy levels, social awareness, and modalities), laying the groundwork for more intuitive and meaningful private conversations in social VR.

{"title":"Don't They Really Hear Us? A Design Space for Private Conversations in Social Virtual Reality.","authors":"Josephus Jasper Limbago, Robin Welsch, Florian Muller, Mario Di Francesco","doi":"10.1109/TVCG.2025.3549844","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549844","url":null,"abstract":"<p><p>Seamless transition between public dialogue and private talks is essential in everyday conversations. Social Virtual Reality (VR) has revolutionized interpersonal communication by creating a sense of closeness over distance through virtual avatars. However, existing social VR platforms are not successful in providing safety and supporting private conversations, thereby hindering self-disclosure and limiting the potential for meaningful experiences. We approach this problem by exploring the factors affecting private conversations in social VR applications, including the usability of different interaction methods and the awareness with respect to the virtual world. We conduct both expert interviews and a controlled experiment with a social VR prototype we realized. We then leverage the outcomes of the two studies to establish a design space that considers diverse dimensions (including privacy levels, social awareness, and modalities), laying the groundwork for more intuitive and meaningful experiences of private conversation in social VR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Environment Spatial Restitution for Remote Physical AR Collaboration.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549533
Bruno Caby, Guillaume Bataille, Florence Danglade, Jean-Remy Chardonnet

The emergence of spatial immersive technologies enables new ways to collaborate remotely. However, these still need to be studied and enhanced to improve their effectiveness and usability for collaborators. Remote Physical Collaborative Extended Reality (RPC-XR) involves solving augmented physical tasks with the help of remote collaborators. This paper presents our RPC-AR system and a user study evaluating it during a network hardware assembly task. Our system offers verbal and non-verbal interpersonal communication functionalities. Users embody avatars and interact with their remote collaborators through hand, head, and eye tracking, and voice. Our system also captures an environment spatially in real time and renders it in a shared virtual space. We designed it to be lightweight and to avoid instrumenting collaborative environments or requiring preliminary steps. It performs capture, transmission, and remote rendering of real environments in less than 250 ms. We ran a cascading user study to compare our system with a commercial 2D video collaboration application. We measured mutual awareness, task load, usability, and task performance. We present an adapted Uncanny Valley questionnaire to compare the perception of remote environments between systems. We found that our application resulted in better empathy between collaborators, a higher cognitive load, and a lower, though still acceptable, level of usability for the remote user. We did not observe any significant difference in performance. These results are encouraging, as participants' observations provide insights to further improve the performance and usability of RPC-AR.
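
The reported sub-250 ms figure implies a per-stage latency budget across capture, transmission, and rendering. The sketch below shows one generic way to instrument such a pipeline; the stage names and the sleep-based stand-in timings are illustrative assumptions, not measurements of the authors' system.

```python
import time

BUDGET_MS = 250.0  # end-to-end target reported in the paper

def timed(stage_fn):
    """Run a pipeline stage and return (result, elapsed milliseconds)."""
    t0 = time.perf_counter()
    result = stage_fn()
    return result, (time.perf_counter() - t0) * 1000.0

# Stand-in stages; real ones would capture RGB-D frames, encode/send, and render.
stages = {
    "capture":  lambda: time.sleep(0.05),   # ~50 ms placeholder
    "transmit": lambda: time.sleep(0.08),   # ~80 ms placeholder
    "render":   lambda: time.sleep(0.06),   # ~60 ms placeholder
}

total = 0.0
for name, fn in stages.items():
    _, ms = timed(fn)
    total += ms
    print(f"{name:>8}: {ms:6.1f} ms")
print(f"   total: {total:6.1f} ms ({'within' if total <= BUDGET_MS else 'over'} budget)")
```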

{"title":"Environment Spatial Restitution for Remote Physical AR Collaboration.","authors":"Bruno Caby, Guillaume Bataille, Florence Danglade, Jean-Remy Chardonnet","doi":"10.1109/TVCG.2025.3549533","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549533","url":null,"abstract":"<p><p>The emergence of spatial immersive technologies allows new ways to collaborate remotely. However, they still need to be studied and enhanced in order to improve their effectiveness and usability for collaborators. Remote Physical Collaborative Extended Reality (RPC-XR) consists in solving augmented physical tasks with the help of remote collaborators. This paper presents our RPC-AR system and a user study evaluating this system during a network hardware assembly task. Our system offers verbal and non-verbal interpersonal communication functionalities. Users embody avatars and interact with their remote collaborators thanks to hand, head and eye tracking, and voice. Our system also captures an environment spatially, in real-time and renders it in a shared virtual space. We designed it to be lightweight and to avoid instrumenting collaborative environments and preliminary steps. It performs capture, transmission and remote rendering of real environments in less than 250ms. We ran a cascading user study to compare our system with a commercial 2D video collaborative application. We measured mutual awareness, task load, usability and task performance. We present an adapted Uncanny Valley questionnaire to compare the perception of remote environments between systems. We found that our application resulted in better empathy between collaborators, a higher cognitive load and a lower level of usability, remaining acceptable, to the remote user. We did not observe any significant difference in performance. These results are encouraging, as participants' observations provide insights to further improve the performance and usability of RPC-AR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EX-Gaze: High-frequency and Low-latency Gaze Tracking with Hybrid Event-frame Cameras for On-Device Extended Reality.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549565
Ning Chen, Yiran Shen, Tongyu Zhang, Yanni Yang, Hongkai Wen

The integration of gaze/eye tracking into virtual and augmented reality devices has unlocked new possibilities, offering a novel human-computer interaction (HCI) modality for on-device extended reality (XR). Emerging applications in XR, such as low-effort user authentication, mental health diagnosis, and foveated rendering, demand real-time eye tracking at high frequencies, a capability that current solutions struggle to deliver. To address this challenge, we present EX-Gaze, an event-based real-time eye tracking system designed for on-device extended reality. EX-Gaze achieves a high tracking frequency of 2 kHz, providing decent accuracy and low tracking latency. The exceptional tracking frequency of EX-Gaze is achieved through the use of event cameras, cutting-edge, bio-inspired vision hardware that delivers event-stream output at high temporal resolution. We have developed a lightweight tracking framework that enables real-time pupil region localization and tracking on mobile devices. To effectively leverage the sparse nature of event streams, we introduce the sparse event-patch representation and the corresponding sparse event-patches transformer as key components to reduce computation time. Implemented on the Jetson Orin Nano, a low-cost, small-sized mobile device whose hybrid GPU and CPU components can process multiple deep neural networks in parallel, EX-Gaze maximizes the device's computational power through sophisticated computation scheduling and offloading between GPU and CPUs. This enables EX-Gaze to achieve real-time tracking at 2 kHz without accumulating latency. Evaluation on public datasets demonstrates that EX-Gaze outperforms other event-based eye tracking methods by striking the best balance between accuracy and efficiency on mobile devices. These results highlight EX-Gaze's potential as a groundbreaking technology to support XR applications that require high-frequency, real-time eye tracking. The code is available at https://github.com/Ningreka/EX-Gaze.
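
The sparse event-patch idea can be illustrated as follows: bin the event stream onto a patch grid and keep only patches that received enough events, so the downstream transformer processes far fewer tokens than a dense frame would produce. The sensor resolution, patch size, threshold, and per-patch features below are illustrative assumptions, not EX-Gaze's actual parameters.

```python
import numpy as np

def sparse_event_patches(events: np.ndarray, sensor_hw=(256, 320),
                         patch=16, min_events=8):
    """events: (N, 4) array of (x, y, t, polarity).
    Returns (patch_ids, features) for patches receiving >= min_events events;
    each kept patch is summarized by its event count and mean polarity."""
    h, w = sensor_hw
    assert events[:, 0].max() < w and events[:, 1].max() < h, "event outside sensor"
    cols = w // patch
    ids = (events[:, 1].astype(int) // patch) * cols + events[:, 0].astype(int) // patch
    patch_ids, counts = np.unique(ids, return_counts=True)
    keep = counts >= min_events                      # sparsity: drop near-empty patches
    mean_pol = np.array([events[ids == pid, 3].mean() for pid in patch_ids[keep]])
    feats = np.stack([counts[keep].astype(float), mean_pol], axis=1)
    return patch_ids[keep], feats

rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 320, 2000), rng.integers(0, 256, 2000),
                      np.sort(rng.random(2000)), rng.integers(0, 2, 2000)]).astype(float)
pids, feats = sparse_event_patches(ev)
print(f"{len(pids)} active patches of {(320 // 16) * (256 // 16)} possible")
```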

{"title":"EX-Gaze: High-frequency and Low-latency Gaze Tracking with Hybrid Event-frame Cameras for On-Device Extended Reality.","authors":"Ning Chen, Yiran Shen, Tongyu Zhang, Yanni Yang, Hongkai Wen","doi":"10.1109/TVCG.2025.3549565","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549565","url":null,"abstract":"<p><p>The integration of gaze/eye tracking into virtual and augmented reality devices has unlocked new possibilities, offering a novel human-computer interaction (HCI) modality for on-device extended reality (XR). Emerging applications in XR, such as low-effort user authentication, mental health diagnosis, and foveated rendering, demand real-time eye tracking at high frequencies, a capability that current solutions struggle to deliver. To address this challenge, we present EX-Gaze, an event-based real-time eye tracking system designed for on-device extended reality. EX-Gaze achieves a high tracking frequency of 2KHz, providing decent accuracy and low tracking latency. The exceptional tracking frequency of EX-Gaze is achieved through the use of event cameras, cutting-edge, bio-inspired vision hardware that delivers event-stream output at high temporal resolution. We have developed a lightweight tracking framework that enables real-time pupil region localization and tracking on mobile devices. To effectively leverage the sparse nature of event-streams, we introduce the sparse event-patch representation and the corresponding sparse event patches transformer as key components to reduce computational time. Implemented on Jetson Orin Nano, a low-cost, small-sized mobile device with hybrid GPU and CPU components capable of parallel processing of multiple deep neural networks, EX-Gaze maximizes the computation power of Jetson Orin Nano through sophisticated computation scheduling and offloading between GPUs and CPUs. This enables EX-Gaze to achieve real-time tracking at 2KHz without accumulating latency. Evaluation on public datasets demonstrates that EX-Gaze outperforms other event-based eye tracking methods by striking the best balance between accuracy and efficiency on mobile devices. These results highlight EX-Gaze's potential as a groundbreaking technology to support XR applications that require high-frequency and real-time eye tracking. The code is available at https://github.com/Ningreka/EX-Gaze.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual Reality Impacts on Novice Programmers' Self-efficacy.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549567
Nanlin Sun, Wallace S Lages

Virtual Reality has been used to improve motivation and help in the visualization of complex computing topics. However, few studies have directly compared immersive and non-immersive environments. To address this limitation, we developed Abacus, a programming environment that can run in both immersive and non-immersive modes. We conducted a between-subjects study (n=40), with twenty participants assigned to the desktop mode and twenty to the VR mode. Participants used a block-based editor to complete two programming tasks: a non-spatial procedural task and a spatial 3D math task. We found that VR led to higher gains in self-efficacy and that the gain was significant for participants with lower initial levels of experience and spatial skills.

{"title":"Virtual Reality Impacts on Novice Programmers' Self-efficacy.","authors":"Nanlin Sun, Wallace S Lages","doi":"10.1109/TVCG.2025.3549567","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549567","url":null,"abstract":"<p><p>Virtual Reality has been used to improve motivation and help in the visualization of complex computing topics. However, few studies directly compared immersive and non-immersive environments. To address this limitation, we developed Abacus, a programming environment that can run in both immersive and non-immersive modes. We conducted a between-subjects study (n=40), with twenty participants assigned to the desktop mode and twenty participants assigned to the VR mode. Participants used a block-based editor to complete two programming tasks: a non-spatial procedural task, and a spatial 3D math task. We found that VR led to higher gains in self-efficacy and that the gain was significant for participants with lower initial levels of experience and spatial skills.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of Proprioceptive Attenuation with Noisy Tendon Electrical Stimulation on Adaptation to beyond-Real Interaction.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549562
Maki Ogawa, Keigo Matsumoto, Kazuma Aoyama, Takuji Narumi

Virtual reality (VR) enables beyond-real interactions (BRI) that transcend physical constraints, offering effective user experiences such as extending a hand to grasp distant objects. However, adapting to the novel mappings of BRI often reduces performance and the sense of embodiment. To address this, we propose using noisy tendon electrical stimulation (n-TES) to decrease proprioceptive precision. Previous studies have suggested that attenuating proprioceptive precision is crucial for sensory-motor adaptation. We therefore hypothesize that n-TES, which has been shown to reduce proprioceptive precision and induce vision-dependent perception in VR, can enhance user adaptation to BRI. We conducted a user study using go-go interaction, a BRI technique for interacting with distant objects, to assess the effects of n-TES. Given individual variability in response to n-TES, participants first underwent a proprioceptive precision test to determine, from five levels (σ = 0.25-1.25 mA), the stimulation intensity that best lowered their proprioceptive precision. Reaching tasks in a 2×2 within-participants design evaluated the effects of go-go interaction and n-TES on performance, subjective task load, and embodiment. Results from 24 participants showed that go-go interaction increased reaching time and task load while decreasing the sense of embodiment. Contrary to our hypothesis, n-TES did not significantly mitigate most of these negative effects, although perceived agency was higher with n-TES during go-go interaction. The limited effectiveness of n-TES may be due to participants' habituation or sensory adaptation during the tasks. Future research should consider the adaptation process to BRI and investigate different BRI scenarios.
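
For reference, the go-go technique used in the study maps real to virtual hand distance non-linearly (Poupyrev et al., 1996): the virtual hand tracks the real hand within a threshold distance D and extends quadratically beyond it. A minimal sketch with illustrative parameter values:

```python
def go_go_distance(r_real: float, d_threshold: float = 0.4, k: float = 1.6) -> float:
    """Classic go-go mapping: linear within d_threshold (meters),
    quadratic amplification beyond it so the virtual hand reaches far objects."""
    if r_real <= d_threshold:
        return r_real
    return r_real + k * (r_real - d_threshold) ** 2

for r in (0.2, 0.4, 0.6, 0.8):
    print(f"real {r:.1f} m -> virtual {go_go_distance(r):.2f} m")
# real 0.6 m -> virtual 0.66 m; real 0.8 m -> virtual 1.06 m
```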

{"title":"Effects of Proprioceptive Attenuation with Noisy Tendon Electrical Stimulation on Adaptation to beyond-Real Interaction.","authors":"Maki Ogawa, Keigo Matsumoto, Kazuma Aoyama, Takuji Narumi","doi":"10.1109/TVCG.2025.3549562","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549562","url":null,"abstract":"<p><p>Virtual reality (VR) enables beyond-real interactions (BRI) that transcend physical constraints, offering effective user experiences like extending a hand to grasp distant objects. However, adapting to novel mappings of BRI often reduces performance and the sense of embodiment. To address this, we propose using noisy tendon electrical stimulation (n-TES) to decrease proprioceptive precision. Previous studies have suggested that attenuating proprioceptive precision is crucial for sensory-motor adaptations. Thus, we hypothesize that n-TES, which has been shown to reduce proprioceptive precision and induce visual-dependent perception in VR, can enhance user adaptation to BRI. We conducted a user study using go-go interaction, a BRI technique for interacting with distant objects, to assess the effects of n-TES. Given the individual variability in n-TES response, participants first underwent a proprioceptive precision test to determine the optimal stimulation intensity to lower the proprioceptive precision from 5 levels (σ = 0.25 - 1.25 mA). Reaching tasks using a 2x2 within-participants design evaluated the effects of go-go interaction and n-TES on performance, subjective task load, and embodiment. Results from 24 participants showed that go-go interaction increased reaching time and task load while decreasing the sense of embodiment. Contrary to our hypothesis, n-TES did not significantly mitigate most of these negative effects of go-go interaction, except that perceived agency was higher with n-TES during go-go interaction. The limited effectiveness of n-TES may be due to participants' habituation or sensory adaptation during the tasks. Future research should consider the adaptation process to BRI and investigate different BRI scenarios.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effective VR Intervention to Reduce Implicit Bias towards People with Physical Disabilities: The Interplay between Experience Design and Individual Characteristics.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549532
Hyuckjin Jang, Jeongmi Lee

Recent studies have utilized virtual reality (VR) as an "empathy machine" to mitigate bias towards various social groups. However, studies addressing bias against physical disabilities remain scarce, with inconsistent results depending on VR experience design. Moreover, most studies have assumed universal effects of VR simulation on bias reduction, ignoring the potential moderating effects of individual characteristics. This study investigated how experience design components and individual characteristics moderate a VR simulation's effect on bias towards physical disabilities. We designed a VR wheelchair experience, manipulating the situational context (negative, neutral) and whole-body avatar visualization (visible, invisible). Participants' implicit and explicit bias levels were assessed to examine changes according to the VR design components and individual characteristics (gender, preexisting bias level). Results indicated that, following the VR intervention, implicit bias was reduced in the group with higher preexisting bias but increased in the group with lower preexisting bias. In addition, gender interacted with avatar visualization such that male participants' implicit bias was reduced with invisible avatars but increased with visible avatars. Explicit bias, in contrast, was reduced regardless of condition, suggesting potential response bias in self-report measures. These findings underscore the importance of considering the complex interplay between experience design and individual characteristics in understanding VR's efficacy as an empathy-inducing tool. This study provides insights and guidelines for developing more effective VR interventions to alleviate implicit bias towards physical disabilities.

{"title":"Effective VR Intervention to Reduce Implicit Bias towards People with Physical Disabilities: The Interplay between Experience Design and Individual Characteristics.","authors":"Hyuckjin Jang, Jeongmi Lee","doi":"10.1109/TVCG.2025.3549532","DOIUrl":"10.1109/TVCG.2025.3549532","url":null,"abstract":"<p><p>Recent studies utilized virtual reality (VR) as an \"empathy machine\" to mitigate bias towards various social groups. However, studies addressing bias against physical disabilities remain scarce, with inconsistent results based on VR experience design. Moreover, most studies assumed the universal effects of VR simulation on bias reduction, ignoring the potential moderating effects of individual characteristics. This study investigated how experience design components and individual characteristics moderate VR simulation's effect on changes in bias towards physical disabilities. We designed a VR wheelchair experience, manipulating the situational context (negative, neutral) and whole-body avatar visualization (visible, invisible). Participants' implicit and explicit bias levels were assessed to examine the changes according to VR design components and individual characteristics (gender, preexisting bias level). Results indicated that following the VR intervention, implicit bias was reduced in the group with higher preexisting bias but rather increased in the group with lower preexisting bias. In addition, gender interacted with avatar visualization such that male participants' implicit bias was reduced with invisible avatars but increased with visible avatars. Explicit bias, in contrast, was reduced regardless of conditions, suggesting the potential response bias in self-report measures. These findings underscore the importance of considering the complex interplay between experience design and individual characteristics in understanding VR's efficacy as an empathy-inducing tool. This study provides insights and guidelines for developing more effective VR interventions to alleviate implicit bias towards physical disabilities.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MPGS: Multi-plane Gaussian Splatting for Compact Scenes Rendering.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549551
Deqi Li, Shi-Sheng Huang, Hua Huang

Accurate and efficient reconstruction of heterogeneous scenes for high-fidelity rendering remains a crucial but challenging task in many Virtual Reality and Augmented Reality applications. The recent 3D Gaussian Splatting (3DGS) has shown impressive rendering quality with real-time performance. However, for heterogeneous scenes with many weakly textured regions, the original 3DGS easily produces numerous erroneous floaters and unbalanced reconstructions with redundant 3D Gaussians, which often leads to unsatisfactory renderings. This paper proposes multi-plane Gaussian Splatting (MPGS), which aims to achieve high-fidelity rendering with compact reconstruction for heterogeneous scenes. The key insight of MPGS is a novel multi-plane Gaussian optimization strategy that effectively adjusts the Gaussian distribution for both richly and weakly textured regions. Moreover, we propose a multi-scale geometric correction mechanism to mitigate degradation of the 3D Gaussian distribution during compact scene reconstruction. In addition, we regularize the Gaussian distributions using normal information extracted during compact scene learning. Experimental results on public datasets demonstrate that MPGS achieves much better rendering quality than previous methods while using less storage and rendering more efficiently. To the best of our knowledge, MPGS is a new state-of-the-art 3D Gaussian splatting method for compact reconstruction of heterogeneous scenes, enabling high-fidelity novel view synthesis and, in particular, improving rendering quality in weakly textured regions. The code will be released at https://github.com/wanglids/MPGS.
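
The abstract mentions regularizing the Gaussians with normals extracted during compact scene learning but gives no formula; one plausible form of such a regularizer (an assumption on our part, not necessarily the authors' formulation) aligns each Gaussian's shortest principal axis with the estimated surface normal via a sign-agnostic cosine loss:

```python
import torch

def normal_alignment_loss(rotations: torch.Tensor, scales: torch.Tensor,
                          normals: torch.Tensor) -> torch.Tensor:
    """rotations: (N, 3, 3) rotation matrices; scales: (N, 3) per-axis scales;
    normals: (N, 3) unit surface normals estimated from the scene.
    Penalizes misalignment between each Gaussian's flattest axis and the normal."""
    shortest = scales.argmin(dim=1)                   # index of the flattest axis
    axes = rotations[torch.arange(rotations.shape[0]), :, shortest]  # (N, 3) columns
    cos = torch.nn.functional.cosine_similarity(axes, normals, dim=1)
    return (1.0 - cos.abs()).mean()                   # sign-agnostic alignment

# Toy check: identity rotations, z-axis flattest, normals along +z -> loss ~ 0.
R = torch.eye(3).expand(4, 3, 3)
s = torch.tensor([[1.0, 1.0, 0.1]] * 4)
n = torch.tensor([[0.0, 0.0, 1.0]] * 4)
print(normal_alignment_loss(R, s, n).item())
```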

{"title":"MPGS: Multi-plane Gaussian Splatting for Compact Scenes Rendering.","authors":"Deqi Li, Shi-Sheng Huang, Hua Huang","doi":"10.1109/TVCG.2025.3549551","DOIUrl":"10.1109/TVCG.2025.3549551","url":null,"abstract":"<p><p>Accurate reconstruction of heterogeneous scenes for high-fidelity rendering in an efficient manner remains a crucial but challenging task in many Virtual Reality and Augmented Reality applications. The recent 3D Gaussian Splatting (3DGS) has shown impressive quality in scene rendering with real-time performance. However, for heterogeneous scenes with many weak-textured regions, the original 3DGS can easily produce numerously wrong floaters with unbalanced reconstruction using redundant 3D Gaussians, which often leads to unsatisfied scene rendering. This paper proposes a novel multi-plane Gaussian Splatting (MPGS), which aims to achieve high-fidelity rendering with compact reconstruction for heterogeneous scenes. The key insight of our MPGS is the introduction of a novel multi-plane Gaussian optimization strategy, which effectively adjusts the Gaussian distribution for both rich-textured and weak-textured regions in heterogeneous scenes. Moreover, we further propose a multi-scale geometric correction mechanism to effectively mitigate degradation of the 3D Gaussian distribution for compact scene reconstruction. Besides, we regularize the Gaussian distributions using normal information extracted from the compact scene learning. Experimental results on public datasets demonstrate that the proposed MPGS achieves much better rendering quality compared to previous methods, while using less storage and offering more efficient rendering. To our best knowledge, MPGS is a new state-of-the-art 3D Gaussian splatting method for compact reconstruction of heterogeneous scenes, enabling high-fidelity rendering in novel view synthesis, especially improving rendering quality for weak-textured regions. The code will be released at https://github.com/wanglids/MPGS.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoding Learner Behavior in Virtual Reality Education: Insights from Epistemic Network Analysis and Differential Sequence Mining.
Pub Date: 2025-03-10 DOI: 10.1109/TVCG.2025.3549899
Antony Prakash, Ramkumar Rajendran

The integration of immersive Virtual Reality (I-VR) technology in education has emerged as a promising approach for enhancing learning experiences. A handful of studies have examined the impact of I-VR on learning outcomes, compared learning with I-VR against traditional methods, and investigated the influence of factors such as haptic sensation and verbal and non-verbal cues on learning outcomes. However, there is a dearth of research on how learning happens from the perspective of learner behavior in the Virtual Reality Learning Environment (VRLE). To address this gap, we developed an Interaction Behavioral Data (IBD) logging mechanism that logs all the interaction traces constituting learner behavior in a VRLE. We deployed the IBD logging mechanism in a VRLE for learning electromagnetic induction concepts and conducted a study with 30 undergraduate computer science students. We extract the learners' actions from the logged data and contextualize them based on action features such as duration (Long and Short) and frequency of occurrence (First and Repeated occurrence). In this paper, we investigate the actions extracted from the logged interaction traces to understand the behaviors that lead to high and low performance in the VRLE. Using Epistemic Network Analysis (ENA), we identify differences in prominent actions and co-occurring actions between high and low performers. Additionally, we apply Differential Sequence Mining (DSM) to uncover significant multi-action patterns that are differentially frequent between the two groups. Our findings demonstrate that high performers engage in structured, iterative patterns of experimentation and evaluation, while low performers exhibit less focused exploration. The insights gained from ENA and DSM highlight the behavioral variations between high and low performers in the VRLE, providing valuable information for enhancing learning experiences in VRLEs. VR content developers can further utilize these insights to develop adaptive VR learning content with personalized scaffolding, thereby enhancing the learning process via I-VR.
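
The contextualization step described above, tagging each logged action as Long/Short by duration and First/Repeated by occurrence, can be sketched as a single pass over the interaction trace. The duration threshold and field names are illustrative assumptions:

```python
from collections import defaultdict

def contextualize(trace, long_threshold_s=5.0):
    """trace: list of dicts with 'action' and 'duration' (seconds), in log order.
    Returns contextualized labels like 'Long-First-RunExperiment'."""
    seen = defaultdict(int)
    labeled = []
    for event in trace:
        length = "Long" if event["duration"] >= long_threshold_s else "Short"
        occurrence = "First" if seen[event["action"]] == 0 else "Repeated"
        seen[event["action"]] += 1
        labeled.append(f"{length}-{occurrence}-{event['action']}")
    return labeled

trace = [
    {"action": "RunExperiment", "duration": 12.0},
    {"action": "ReadInstruction", "duration": 3.2},
    {"action": "RunExperiment", "duration": 7.5},
]
print(contextualize(trace))
# ['Long-First-RunExperiment', 'Short-First-ReadInstruction', 'Long-Repeated-RunExperiment']
```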

{"title":"Decoding Learner Behavior in Virtual Reality Education: Insights from Epistemic Network Analysis and Differential Sequence Mining.","authors":"Antony Prakash, Ramkumar Rajendran","doi":"10.1109/TVCG.2025.3549899","DOIUrl":"10.1109/TVCG.2025.3549899","url":null,"abstract":"<p><p>The integration of immersive Virtual Reality (I-VR) technology in education has emerged as a promising approach for enhancing learning experiences. There is a handful of research done to study the impact of I-VR on learning outcomes, comparison of learning using I-VR and other traditional learning methods, and the impact of values such as haptic sensation, and verbal and non-verbal cues on the learning outcomes. However, there is a dearth of research on understanding how learning is happening from the perspective of the behavior of the learners in the Virtual Reality Learning Environment (VRLE). To address this gap, we developed an Interaction Behavioral Data (IBD) logging mechanism to log all the interaction traces that constitute the behavior of the learners in a Virtual Reality Learning Environment (VRLE). We deployed the IBD logging mechanism in a VRLE used to learn electromagnetic induction concepts and conducted a study with 30 undergraduate computer science students. We extract the learners' actions from the logged data and contextualize them based on the action features such as duration (Long and Short), and frequency of occurrence (First and Repeated occurrence). In this paper, we investigate the actions extracted from logged interaction trace data to understand the behaviors that lead to high and low performance in the VRLE. Using Epistemic Network Analysis (ENA), we identify differences in prominent actions and co-occurring actions between high and low performers. Additionally, we apply Differential Sequence Mining (DSM) to uncover significant action patterns, involving multiple actions, that are differentially frequent between these two groups. Our findings demonstrate that high performers engage in structured, iterative patterns of experimentation and evaluation, while low performers exhibit less focused exploration patterns. The insights gained from ENA and DSM highlight the behavioral variations between high and low performers in the VRLE, providing valuable information for enhancing learning experiences in VRLEs. These insights gained can be further utilized by the VR content developers to o develop adaptive VR learning content by providing personalized scaffolding leading to the enhancement in the learning process via I-VR.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0