
Latest publications in IEEE Transactions on Visualization and Computer Graphics

Shape It Up: An Empirically Grounded Approach for Designing Shape Palettes.
Pub Date : 2024-09-16 DOI: 10.1109/TVCG.2024.3456385
Chin Tseng, Arran Zeyu Wang, Ghulam Jilani Quadri, Danielle Albers Szafir

Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Unlike color, shapes cannot be represented by a numerical space, making it difficult to propose general guidelines or design heuristics for using shape effectively. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert preference, and correlation estimation. Our results show that conventional means for reasoning about shapes, such as filled versus unfilled, are insufficient to inform effective palette design. Further, even expert palettes vary significantly in their use of shape and corresponding effectiveness. To support effective shape palette design, we developed a model based on pairwise relations between shapes in our experiments and the number of shapes required for a given design. We embed this model in a palette design tool to give designers agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances understanding of shape perception in visualization contexts and provides practical design guidelines that can help improve categorical data encodings.

Citations: 0
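The paper's pairwise model suggests a natural way to frame palette construction as an optimization over shape pairs. A minimal sketch, assuming a hypothetical pairwise-discriminability matrix (illustrative values, not the authors' fitted model): greedily grow a palette so that the worst pairwise distinguishability stays as high as possible.

```python
# Hypothetical pairwise discriminability scores between six marker shapes
# (higher = easier to tell apart); values are illustrative, not the paper's data.
SHAPES = ["circle", "square", "triangle", "plus", "cross", "diamond"]
D = [
    [0.0, 0.9, 0.8, 0.7, 0.6, 0.5],
    [0.9, 0.0, 0.7, 0.8, 0.7, 0.6],
    [0.8, 0.7, 0.0, 0.9, 0.8, 0.7],
    [0.7, 0.8, 0.9, 0.0, 0.4, 0.8],
    [0.6, 0.7, 0.8, 0.4, 0.0, 0.9],
    [0.5, 0.6, 0.7, 0.8, 0.9, 0.0],
]

def greedy_palette(D, k):
    """Greedily pick k shape indices, maximizing the worst pairwise score."""
    n = len(D)
    # Seed with one shape from the single most discriminable pair.
    i, _ = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda p: D[p[0]][p[1]])
    chosen = [i]
    while len(chosen) < k:
        rest = (c for c in range(n) if c not in chosen)
        # Add the candidate whose hardest confusion with the current set is least bad.
        chosen.append(max(rest, key=lambda c: min(D[c][s] for s in chosen)))
    return chosen

palette = [SHAPES[i] for i in greedy_palette(D, 3)]
```

The max-min criterion mirrors the intuition that a palette is only as good as its most confusable pair, which is why simple rules like filled versus unfilled can fail once the category count grows.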
HaptoFloater: Visuo-Haptic Augmented Reality by Embedding Imperceptible Color Vibration Signals for Tactile Display Control in a Mid-Air Image
Pub Date : 2024-09-16 DOI: 10.1109/TVCG.2024.3456175 | Vol. 30, No. 11, pp. 7463-7472
Rina Nagano;Takahiro Kinoshita;Shingo Hattori;Yuichi Hiroi;Yuta Itoh;Takefumi Hiraki
We propose HaptoFloater, a low-latency mid-air visuo-haptic augmented reality (VHAR) system that utilizes imperceptible color vibrations. When adding tactile stimuli to the visual information of a mid-air image, the user should not perceive the latency between the tactile and visual information. However, conventional tactile presentation methods for mid-air images, based on camera-detected fingertip positioning, introduce latency due to image processing and communication. To mitigate this latency, we use a color vibration technique; humans cannot perceive the vibration when the display alternates between two different color stimuli at a frequency of 25 Hz or higher. In our system, we embed this imperceptible color vibration into the mid-air image formed by a micromirror array plate, and a photodiode on the fingertip device directly detects this color vibration to provide tactile stimulation. Thus, our system allows for the tactile perception of multiple patterns on a mid-air image in 59.5 ms. In addition, we evaluate the visual-haptic delay tolerance on a mid-air display using our VHAR system and a tactile actuator with a single pattern and faster response time. The results of our user study indicate a visual-haptic delay tolerance of 110.6 ms, which is considerably larger than the latency associated with systems using multiple tactile patterns.
Citations: 0
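The core trick, alternating two colors fast enough that they fuse perceptually while a photodiode can still tell them apart, can be sketched as a per-frame toggle. A minimal illustration, assuming a 60 Hz display (so a 30 Hz alternation, above the ~25 Hz threshold the abstract cites); the specific RGB pair is hypothetical, not the paper's calibrated stimuli:

```python
# Alternate two colors every display frame. On a 60 Hz display this yields a
# 30 Hz color vibration, above the ~25 Hz fusion threshold, so a viewer sees
# only the blended color while a fingertip photodiode can detect the signal.
COLOR_A = (120, 60, 200)  # hypothetical RGB pair straddling the target color
COLOR_B = (136, 60, 184)

def frame_color(frame_index):
    """Color to render on a given frame: A on even frames, B on odd frames."""
    return COLOR_A if frame_index % 2 == 0 else COLOR_B

def decode(sample_rgb):
    """Photodiode-side sketch: classify a sampled color as phase 0 or 1."""
    da = sum((a - s) ** 2 for a, s in zip(COLOR_A, sample_rgb))
    db = sum((b - s) ** 2 for b, s in zip(COLOR_B, sample_rgb))
    return 0 if da <= db else 1
```

Detecting the alternation directly at the fingertip is what lets the system skip the camera-tracking and communication pipeline that introduces latency in conventional approaches.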
MobiTangibles: Enabling Physical Manipulation Experiences of Virtual Precision Hand-Held Tools' Miniature Control in VR
Pub Date : 2024-09-13 DOI: 10.1109/TVCG.2024.3456191 | Vol. 30, No. 11, pp. 7321-7331
Abhijeet Mishra;Harshvardhan Singh;Aman Parnami;Jainendra Shukla
Realistic simulation of miniature control interactions, the precise and confined motions typical of precision hand-held tools such as calipers, powered engravers, and retractable knives, is beneficial for skill training with these kinds of tools in virtual reality (VR) environments. However, existing approaches aiming to simulate hand-held tools' miniature control manipulation experiences in VR entail prototyping complexity and require expertise, posing challenges for novice users and individuals with limited resources. Addressing this challenge, we introduce MobiTangibles—proxies for precision hand-held tools' miniature control interactions utilizing smartphone-based magnetic field sensing. MobiTangibles passively replicate fundamental miniature control experiences associated with hand-held tools, such as single-axis translation and rotation, enabling quick and easy use for diverse VR scenarios without requiring extensive technical knowledge. We conducted a comprehensive technical evaluation to validate the functionality of MobiTangibles across diverse settings, including evaluations for electromagnetic interference within indoor environments. In a user-centric evaluation involving 15 participants across bare hands, VR controllers, and MobiTangibles conditions, we further assessed the quality of miniaturized manipulation experiences in VR. Our findings indicate that MobiTangibles outperformed conventional methods in realism and fatigue, receiving positive feedback.
Citations: 0
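As a toy illustration of how magnetometer readings can be mapped to single-axis controls (the axes, calibration constants, and mapping here are assumptions for illustration, not the MobiTangibles implementation): a magnet rotating in the sensor's plane maps to a knob angle via `atan2`, and field magnitude can approximate travel along a slider.

```python
import math

def knob_angle_deg(mx, my):
    """Angle of a magnet rotating in the sensor's x-y plane, in [0, 360)."""
    return math.degrees(math.atan2(my, mx)) % 360.0

def slider_position(field_magnitude, near=400.0, far=50.0):
    """Normalize field strength to a 0..1 travel estimate (toy calibration)."""
    t = (field_magnitude - far) / (near - far)
    return min(1.0, max(0.0, t))
```

Because the sensing is passive on the tangible's side, a proxy built this way needs no electronics beyond the magnet itself, which is what keeps prototyping cost low.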
From Avatars to Agents: Self-Related Cues Through Embodiment and Personalization Affect Body Perception in Virtual Reality
Pub Date : 2024-09-13 DOI: 10.1109/TVCG.2024.3456211 | Vol. 30, No. 11, pp. 7386-7396
Marie Luisa Fielder;Erik Wolf;Nina Döllinger;David Mal;Mario Botsch;Marc Erich Latoschik;Carolin Wienrich
Our work investigates the influence of self-related cues in the design of virtual humans on body perception in virtual reality. In a 2×2 mixed design, 64 participants faced photorealistic virtual humans either as a motion-synchronized embodied avatar or as an autonomous moving agent, appearing subsequently with a personalized and generic texture. Our results unveil that self-related cues through embodiment and personalization yield an individual and complemented increase in participants' sense of embodiment and self-identification towards the virtual human. Different body weight modification and estimation tasks further showed an impact of both factors on participants' body weight perception. Additional analyses revealed that the participant's body mass index predicted body weight estimations in all conditions and that participants' self-esteem and body shape concerns correlated with different body weight perception results. Hence, we have demonstrated the occurrence of double standards through induced self-related cues in virtual human perception, especially through embodiment.
Citations: 0
Searching Across Realities: Investigating ERPs and Eye-Tracking Correlates of Visual Search in Mixed Reality
Pub Date : 2024-09-12 DOI: 10.1109/TVCG.2024.3456172 | Vol. 30, No. 11, pp. 6997-7007
Francesco Chiossi;Ines Trautmannsheimer;Changkun Ou;Uwe Gruenefeld;Sven Mayer
Mixed Reality allows us to integrate virtual and physical content into users' environments seamlessly. Yet, how this fusion affects perceptual and cognitive resources and our ability to find virtual or physical objects remains uncertain. Displaying virtual and physical information simultaneously might lead to divided attention and increased visual complexity, impacting users' visual processing, performance, and workload. In a visual search task, we asked participants to locate virtual and physical objects in Augmented Reality and Augmented Virtuality to understand the effects on performance. We evaluated search efficiency and attention allocation for virtual and physical objects using event-related potentials, fixation and saccade metrics, and behavioral measures. We found that users were more efficient in identifying objects in Augmented Virtuality, while virtual objects gained saliency in Augmented Virtuality. This suggests that visual fidelity might increase the perceptual load of the scene. Reduced amplitude of the distractor positivity ERP and fixation patterns supported improved distractor suppression and search efficiency in Augmented Virtuality. We discuss design implications for mixed reality adaptive systems based on physiological inputs for interaction.
Citations: 0
VPRF: Visual Perceptual Radiance Fields for Foveated Image Synthesis
Pub Date : 2024-09-11 DOI: 10.1109/TVCG.2024.3456184 | Vol. 30, No. 11, pp. 7183-7192
Zijun Wang;Jian Wu;Runze Fan;Wei Ke;Lili Wang
Neural radiance fields (NeRF) have achieved a revolutionary breakthrough in the novel view synthesis task for complex 3D scenes. However, this new paradigm struggles to meet the requirements for real-time rendering and high perceptual quality in virtual reality. In this paper, we propose VPRF, a novel visual-perception-based radiance field representation method, which for the first time integrates the visual acuity and contrast sensitivity models of the human visual system (HVS) into the radiance field rendering framework. Initially, we encode both the appearance and visual sensitivity information of the scene into our radiance field representation. Then, we propose a visual perceptual sampling strategy, allocating computational resources according to the HVS sensitivity of different regions. Finally, we propose a sampling weight-constrained training scheme to ensure the effectiveness of our sampling strategy and improve the representation of the radiance field based on the scene content. Experimental results demonstrate that our method renders more efficiently, with higher PSNR and SSIM in the foveal and salient regions compared to the state-of-the-art FoV-NeRF. The results of the user study confirm that our rendering results exhibit high-fidelity visual perception.
Citations: 0
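The idea of allocating rendering budget by visual sensitivity can be sketched with a simple eccentricity-based falloff. The falloff constant and sample bounds below are illustrative placeholders, not VPRF's fitted acuity and contrast-sensitivity models:

```python
import math

def samples_per_pixel(ecc_deg, max_s=64, min_s=4, falloff=0.12):
    """Ray-sample budget for a pixel at a given eccentricity from the gaze point."""
    weight = math.exp(-falloff * ecc_deg)  # acuity decays away from the fovea
    return max(min_s, round(max_s * weight))

# Budget shrinks monotonically from the fovea toward the periphery.
budget = [samples_per_pixel(e) for e in (0, 5, 15, 40)]
```

Concentrating samples where the HVS is most sensitive is what lets a foveated renderer trade peripheral fidelity for frame rate without a visible quality loss.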
PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings.
Pub Date : 2024-09-10 DOI: 10.1109/TVCG.2024.3456199
Fabian Beck

Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach is, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.

Citations: 0
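The suggestion logic described, counting citation links between each candidate and the seed selection and boosting candidates that match user keywords, can be sketched as follows. The toy citation graph and the additive-score-plus-multiplicative-boost scheme are assumptions for illustration, not the tool's actual ranking function:

```python
def suggest(cites, seeds, keywords=(), boost=2.0):
    """Rank non-seed papers by citation links to/from the seed set.

    cites maps each paper title to the set of titles it cites.
    """
    seeds = set(seeds)
    scores = {}
    for paper in cites:
        if paper in seeds:
            continue
        outgoing = len(cites[paper] & seeds)                    # paper cites seeds
        incoming = sum(paper in cites[s] for s in seeds if s in cites)  # seeds cite paper
        score = outgoing + incoming
        if any(kw.lower() in paper.lower() for kw in keywords):
            score *= boost                                      # keyword steering
        scores[paper] = score
    return sorted(scores, key=scores.get, reverse=True)

cites = {
    "Seed A": {"Survey of X"},
    "Seed B": {"Survey of X"},
    "Survey of X": set(),
    "Method Y": {"Seed A"},
}
ranking = suggest(cites, seeds={"Seed A", "Seed B"})
```

Raising `boost` for a keyword lets the user steer the ranking toward a topic, which matches the paper's emphasis on keyword-controlled, visually explainable rankings.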
CSLens: Towards Better Deploying Charging Stations via Visual Analytics -- A Coupled Networks Perspective.
Pub Date : 2024-09-10 DOI: 10.1109/TVCG.2024.3456392
Yutian Zhang, Liwen Xu, Shaocong Tao, Quanxue Guan, Quan Li, Haipeng Zeng

In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Academia and industry have thus devoted considerable effort to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates evaluating the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. Our findings underscore CSLens's potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.

Citations: 0
Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality
Pub Date : 2024-09-10 DOI: 10.1109/TVCG.2024.3456164
Zhimin Wang;Feng Lu
With eye tracking finding widespread utility in augmented reality and virtual reality headsets, eye gaze has the potential to recognize users' visual tasks and adaptively adjust virtual content displays, thereby enhancing the intelligence of these headsets. However, current studies on visual task recognition often focus on scene-specific tasks, like copying tasks for office environments, which lack applicability to new scenarios, e.g., museums. In this paper, we propose four scene-agnostic task types for facilitating task type recognition across a broader range of scenarios. We present a new dataset that includes eye and head movement data recorded from 20 participants while they engaged in four task types across 15 360-degree VR videos. Using this dataset, we propose an egocentric gaze-aware task type recognition method, TRCLP, which achieves promising results. Additionally, we illustrate the practical applications of task type recognition with three examples. Our work offers valuable insights for content developers in designing task-aware intelligent applications. Our dataset and source code are available at zhimin-wang.github.io/TaskTypeRecognition.html.
IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 7277-7287, 2024.
Citations: 0
Frankenstein's Monster in the Metaverse: User Interaction With Customized Virtual Agents
Pub Date : 2024-09-10 DOI: 10.1109/TVCG.2024.3456205
Susanne Schmidt;Ipek Köysürenbars;Frank Steinicke
Enabled by the latest achievements in artificial intelligence (AI), computer graphics as well as virtual, augmented, and mixed reality (VR/AR/MR), virtual agents are increasingly resembling humans in both their appearance and intelligent behavior. This results in enormous potential for agents to support users in their daily lives, for example in customer service, healthcare, education or the envisioned all-encompassing metaverse. Today's technology would allow users to customize their conversation partners in the metaverse - as opposed to reality - according to their preferences, potentially improving the user experience. On the other hand, there is little research on how reshaping the head of a communication partner might affect the immediate interaction with them. In this paper, we investigate the user requirements for and the effects of agent customization. In a two-stage user study ($N=30$), we collected both self-reported evaluations (e.g., intrinsic motivation) and interaction metrics (e.g., interaction duration and number of tried out items) for the process of agent customization itself as well as data on how users perceived the subsequent human-agent interaction in VR. Our results indicate that users only wish to have full customization for agents in their personal social circle, while for general services, a selection or even a definite assignment of pre-configured agents is sufficient. When customization is offered, attributes such as gender, clothing or hair are subjectively more relevant to users than facial features such as skin or eye color. Although the customization of human interaction partners is beyond our control, customization of virtual agents significantly increases perceived social presence as well as rapport and trust. Further findings on user motivation and agent diversity are discussed in the paper.
IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 7162-7171, 2024.
Citations: 0