
Latest publications in IEEE Transactions on Visualization and Computer Graphics

Generating Coherent Visualization Sequences for Multivariate Data by Causal Graph Traversal
IF 6.5 Pub Date : 2026-01-22 DOI: 10.1109/TVCG.2026.3656952
Puripant Ruchikachorn;Darius Coelho;Jun Wang;Kristina Striegnitz;Klaus Mueller
Multivariate data contain an abundance of information and many techniques have been proposed to allow humans to navigate this information in an ordered fashion. For this work, we focus on methods that seek to convey multivariate data as a collection of bivariate scatterplots or parallel coordinates plots. Presenting multivariate data in this way requires a regime that determines in what order the bivariate scatterplots are presented or in what order the parallel coordinate axes are arranged. We refer to this order as a visualization sequence. Common techniques utilize standard statistical metrics like correlation, similarity or consistency. We expand on the family of statistical metrics by incorporating the rigidity of causal relationships. To capture these relationships, we first derive a causal graph from the data and then allow users to select from several semantic traversal schemes to derive the respective chart sequence. We tested the sequences with a crowd-sourced user study and a user interview to confirm that the causality-informed visualization sequences help viewers to better grasp the causal relationships that exist in the data, as opposed to sequences derived from correlations or randomization alone.
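To make the traversal idea concrete, the following Python sketch orders one scatterplot per causal edge by walking a small causal DAG breadth-first from its root causes. The example graph, the variable names, and the breadth-first scheme are illustrative assumptions; the paper offers several semantic traversal schemes rather than this particular one.

```python
# Minimal sketch: derive a scatterplot sequence by traversing a causal DAG.
# The example graph and the breadth-first scheme are assumptions, not the
# paper's exact traversal options.
import networkx as nx

causal = nx.DiGraph()
causal.add_edges_from([
    ("rainfall", "soil_moisture"),
    ("soil_moisture", "crop_yield"),
    ("temperature", "crop_yield"),
])

def chart_sequence(graph: nx.DiGraph) -> list[tuple[str, str]]:
    """Order (cause, effect) pairs by a breadth-first walk from root causes."""
    roots = [n for n in graph.nodes if graph.in_degree(n) == 0]
    sequence, seen = [], set()
    frontier = list(roots)
    while frontier:
        nxt = []
        for node in frontier:
            for succ in graph.successors(node):
                if (node, succ) not in seen:
                    seen.add((node, succ))
                    sequence.append((node, succ))   # one scatterplot per causal edge
                    nxt.append(succ)
        frontier = nxt
    return sequence

print(chart_sequence(causal))
# [('rainfall', 'soil_moisture'), ('temperature', 'crop_yield'), ('soil_moisture', 'crop_yield')]
```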
{"title":"Generating Coherent Visualization Sequences for Multivariate Data by Causal Graph Traversal","authors":"Puripant Ruchikachorn;Darius Coelho;Jun Wang;Kristina Striegnitz;Klaus Mueller","doi":"10.1109/TVCG.2026.3656952","DOIUrl":"10.1109/TVCG.2026.3656952","url":null,"abstract":"Multivariate data contain an abundance of information and many techniques have been proposed to allow humans to navigate this information in an ordered fashion. For this work, we focus on methods that seek to convey multivariate data as a collection of bivariate scatterplots or parallel coordinates plots. Presenting multivariate data in this way requires a regime that determines in what order the bivariate scatterplots are presented or in what order the parallel coordinate axes are arranged. We refer to this order as a <italic>visualization sequence</i>. Common techniques utilize standard statistical metrics like correlation, similarity or consistency. We expand on the family of statistical metrics by incorporating the rigidity of causal relationships. To capture these relationships, we first derive a causal graph from the data and then allow users to select from several semantic traversal schemes to derive the respective chart sequence. We tested the sequences with a crowd-sourced user study and a user interview to confirm that the causality-informed visualization sequences help viewers to better grasp the causal relationships that exist in the data, as opposed to sequences derived from correlations or randomization alone.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"32 3","pages":"2812-2824"},"PeriodicalIF":6.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146032378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Few-Shot Learning Framework for Time-Varying Scientific Data Generation via Conditional Diffusion Model
IF 6.5 Pub Date : 2026-01-22 DOI: 10.1109/TVCG.2026.3656934
Jun Han
A key factor in successfully achieving remarkable performance for deep learning models is the availability of large amounts of data. However, in scientific visualization, providing such extensive volumetric data is often infeasible due to the high computational cost of simulations and the challenges of data storage. To address this data sparsity issue in model training, we propose a few-shot learning framework that leverages only a few training samples (e.g., 1, 3, or 5) to ensure both generalization capability and performance through a conditional diffusion model. Our approach consists of two stages: the forward process and the reverse process. In the forward process, we inject noise at various levels into the few samples. In the reverse process, we design a time-aware UNet that iteratively learns to denoise the noisy data. Additionally, we introduce a noise-aware loss function that dynamically adjusts optimization weights based on the noise levels in the training data. Our method demonstrates consistent and robust performance, regardless of the selection method for the few training samples. Furthermore, it achieves superior results in both quantitative and qualitative evaluations compared to state-of-the-art solutions across three scientific visualization tasks: spatial super-resolution, temporal super-resolution, and variable translation.
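The forward/reverse split and the noise-aware loss can be sketched in a few lines of PyTorch. The tiny MLP standing in for the denoiser, the linear beta schedule, and the specific weighting by alpha-bar are placeholder assumptions, not the paper's time-aware UNet or its exact weighting scheme.

```python
# Sketch of (1) the forward process injecting noise at a sampled level t and
# (2) a noise-aware loss that re-weights the denoising error by that level.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)          # cumulative noise schedule

denoiser = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

def noise_aware_loss(x0: torch.Tensor) -> torch.Tensor:
    """x0: (batch, 64) flattened samples from the few-shot training set."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    ab = alpha_bar[t].unsqueeze(1)                      # (b, 1)
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps       # forward (noising) process
    t_feat = (t.float() / T).unsqueeze(1)
    eps_hat = denoiser(torch.cat([xt, t_feat], dim=1))  # reverse-process prediction
    w = ab.squeeze(1)                                   # assumed weighting: down-weight very noisy steps
    return (w * ((eps_hat - eps) ** 2).mean(dim=1)).mean()

loss = noise_aware_loss(torch.randn(4, 64))
loss.backward()
```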
{"title":"A Few-Shot Learning Framework for Time-Varying Scientific Data Generation via Conditional Diffusion Model","authors":"Jun Han","doi":"10.1109/TVCG.2026.3656934","DOIUrl":"10.1109/TVCG.2026.3656934","url":null,"abstract":"A key factor in successfully achieving remarkable performance for deep learning models is the availability of large amounts of data. However, in scientific visualization, providing such extensive volumetric data is often infeasible due to the high computational cost of simulations and the challenges of data storage. To address this data sparsity issue in model training, we propose a few-shot learning framework that leverages only few training samples (e.g., 1, 3, or 5) to ensure both generalization capability and performance through a conditional diffusion model. Our approach consists of two stages: the forward process and the reverse process. In the forward process, we inject noise at various levels into the few samples. In the reverse process, we design a time-aware UNet that iteratively learns to denoise the noisy data. Additionally, we introduce a noise-aware loss function that dynamically adjusts optimization weights based on the noise levels in the training data. Our method demonstrates consistent and robust performance, regardless of the selection method for the few training samples. Furthermore, it achieves superior results in both quantitative and qualitative evaluations compared to state-of-the-art solutions across three scientific visualization tasks: spatial super-resolution, temporal super-resolution, and variable translation.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"32 3","pages":"2825-2837"},"PeriodicalIF":6.5,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146032332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hie4DGS: Hierarchical 4D Gaussian Splatting from Monocular Dynamic Video.
IF 6.5 Pub Date : 2026-01-21 DOI: 10.1109/TVCG.2026.3656737
Kai Cheng, Kaizhi Yang, Xiaoxiao Long, Xiaoyang Guo, Xuejin Chen

Monocular dynamic video reconstruction is a typical ill-posed problem due to limited observations and complex 3D motions. Despite recent advances in dynamic 3D Gaussian splatting techniques, most of them still struggle with the monocular setting, since they heavily rely on geometric cues from multiple cameras or ignore the structural coherence among the optimized 3D Gaussians. To address this, we propose Hie4DGS, a novel hierarchical structure representation to model the complex dynamic motions in monocular dynamic videos. Specifically, we decompose the motions of a dynamic scene into groups at multiple structural granularities and progressively compose them to derive the motion of each 3D Gaussian. Building on this representation, we leverage hierarchical semantic segmentation to group Gaussians and initialize their motion using depth and tracking priors within each group. Additionally, we introduce a structure rendering loss that enforces consistency between the learned motion structure and semantic priors, further reducing motion ambiguity. Compared to state-of-the-art dynamic Gaussian methods, we achieve significant improvement in rendering quality on monocular video datasets featuring complex real-world motions.
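The coarse-to-fine composition can be illustrated with plain transform matrices: each Gaussian's motion is the product of the transforms along its path in the group hierarchy. The two-level hierarchy and the translation-only transforms below are made-up placeholders, not Hie4DGS's learned motions.

```python
# Sketch: compose group transforms from coarse to fine to move one Gaussian.
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

scene_motion = translation(0.0, 0.1, 0.0)   # whole-scene drift (assumed)
group_motion = translation(0.2, 0.0, 0.0)   # e.g. "left arm" group (assumed)
part_motion  = translation(0.0, 0.0, 0.05)  # e.g. "hand" sub-group (assumed)

def compose(path):
    """Apply the coarsest transform first, then progressively finer ones."""
    m = np.eye(4)
    for t in path:
        m = t @ m
    return m

gaussian_center = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous coordinates
moved = compose([scene_motion, group_motion, part_motion]) @ gaussian_center
print(moved[:3])   # -> [1.2, 2.1, 3.05]
```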

Citations: 0
AquaHaptics: Hand-based Multimodal Haptic Interactions for Immersive Virtual Underwater Experience.
IF 6.5 Pub Date : 2026-01-19 DOI: 10.1109/TVCG.2026.3652832
Soyeong Yang, Sang Ho Yoon

With the advancement of haptic interfaces, recent studies have focused on enabling detailed haptic experiences in virtual reality (VR), such as fluid-haptic interaction. However, rendering the forces arising from fluid contact is often computationally expensive. Given that motion-induced fluid feedback is crucial to the overall experience, we focus on hand-perceivable forces to enhance underwater haptic sensation, achieving high-fidelity rendering while accounting for human perceptual capabilities. We present a new multimodal (tactile and kinesthetic) haptic rendering pipeline. Here, we render drag and added-mass forces that dynamically adapt to the user's hand movement and posture using pneumatic haptic gloves. We defined decaying and damping effects to convey fluid properties caused by inertia and confirmed their significant perceptual impact, compared to using only physics-based equations, in a perception study. By modulating pressure variations, we reproduced fluid smoothness via exponential tactile deflation and light fluid mass via linear kinesthetic feedback. Our pipeline enabled richer and more immersive VR underwater experiences by accounting for precise hand regions and motion diversity.
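As a rough sketch of the two hand-perceivable force terms, the snippet below evaluates the standard quadratic-drag and added-mass formulas from hand velocity and acceleration. The density, drag coefficient, reference area, and added-mass coefficient are assumed placeholder values, and the mapping from force to glove pressure is not shown.

```python
# Sketch: drag opposes motion ~ |v| v; added mass opposes acceleration.
import numpy as np

RHO = 1000.0      # water density, kg/m^3
C_D = 1.2         # drag coefficient for an open hand (assumed)
AREA = 0.012      # projected hand area, m^2 (assumed)
C_A = 0.5         # added-mass coefficient (assumed)
VOLUME = 4e-4     # displaced hand volume, m^3 (assumed)

def underwater_force(velocity: np.ndarray, acceleration: np.ndarray) -> np.ndarray:
    speed = np.linalg.norm(velocity)
    drag = -0.5 * RHO * C_D * AREA * speed * velocity
    added_mass = -C_A * RHO * VOLUME * acceleration
    return drag + added_mass

# hand sweeping sideways at 0.5 m/s while accelerating at 1 m/s^2
print(underwater_force(np.array([0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```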

Citations: 0
Make-Your-Anchor+: Temporal Consistent 2D Avatar Generation via Video Diffusion Prior.
IF 6.5 Pub Date : 2026-01-19 DOI: 10.1109/TVCG.2026.3655478
Ziyao Huang, Fan Tang, Juan Cao, Yong Zhang, Xiaodong Cun, Yihang Bo, Jintao Li, Tong-Yee Lee

Despite the remarkable progress of talking-head-based avatar-creation solutions, directly generating anchor-style videos with full-body motions remains challenging. In this study, we propose Make-Your-Anchor+, a novel system necessitating only a one-minute video clip of an individual for training, subsequently enabling the automatic generation of anchor-style videos with precise torso and hand movements. Specifically, we finetune a proposed structure-guided diffusion model on the input video to render 3D mesh conditions into human appearances. We adopt a two-stage training strategy for the diffusion model, effectively mapping movements to specific appearances to create digital avatars for online streamers, live shopping hosts, and other applications. To produce arbitrarily long temporal videos, we extract human motion information from a video diffusion prior by adapting the frame-wise diffusion model to pretrained video diffusion weights at lower cost, and we propose a simple yet effective batch-overlapped temporal denoising module to bypass the constraints on video length during inference. Finally, a novel identity-specific face enhancement module is introduced to improve the visual quality of facial regions in the output videos. Comparative experiments demonstrate the system's effectiveness and superiority in visual quality, temporal coherence, and identity preservation, outperforming SOTA diffusion/non-diffusion methods.
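The batch-overlapped temporal denoising idea can be sketched as follows: a long frame sequence is processed in fixed-size windows that overlap, and overlapping predictions are averaged so window borders stay consistent. The window size, overlap, and the identity placeholder standing in for one denoising pass are assumptions; the actual module runs inside the diffusion sampling loop.

```python
# Sketch: denoise a long sequence in overlapping windows, average the overlaps.
import numpy as np

def denoise_window(frames: np.ndarray) -> np.ndarray:
    return frames                      # placeholder for one diffusion denoising pass

def batch_overlapped_denoise(frames: np.ndarray, window: int = 16, overlap: int = 4) -> np.ndarray:
    n = frames.shape[0]
    acc = np.zeros_like(frames, dtype=np.float64)
    weight = np.zeros(n)
    start, step = 0, window - overlap
    while start < n:
        end = min(start + window, n)
        acc[start:end] += denoise_window(frames[start:end])
        weight[start:end] += 1.0
        if end == n:
            break
        start += step
    return acc / weight[:, None, None, None]          # average frames covered twice

video = np.random.rand(50, 8, 8, 3)                   # 50 tiny placeholder frames
out = batch_overlapped_denoise(video)
assert out.shape == video.shape
```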

Citations: 0
ReVISit 2: A Full Experiment Life Cycle User Study Framework.
IF 6.5 Pub Date : 2026-01-16 DOI: 10.1109/TVCG.2025.3633896
Zach Cutler, Jack Wilburn, Hilson Shrestha, Yiren Ding, Brian Bollen, Khandaker Abrar Nadib, Tingying He, Andrew McNutt, Lane Harrison, Alexander Lex

Online user studies of visualizations, visual encodings, and interaction techniques are ubiquitous in visualization research. Yet, designing, conducting, and analyzing studies effectively is still a major burden. Although various packages support such user studies, most solutions address only facets of the experiment life cycle, make reproducibility difficult, or do not cater to nuanced study designs or interactions. We introduce reVISit 2, a software framework that supports visualization researchers at all stages of designing and conducting browser-based user studies. ReVISit supports researchers in the design, debug & pilot, data collection, analysis, and dissemination phases of an experiment by providing both technical affordances (such as replay of participant interactions) and sociotechnical aids (such as a mindfully maintained community of support). It is a proven system that can be (and has been) used in publication-quality studies, which we demonstrate through a series of experimental replications. We reflect on the design of the system via interviews and an analysis of its technical dimensions. Through this work, we seek to elevate the ease with which studies are conducted, improve the reproducibility of studies within our community, and support the construction of advanced interactive studies.

Citations: 0
MAGIC: Marching Cubes Isosurface Uncertainty Visualization for Gaussian Uncertain Data with Spatial Correlation.
IF 6.5 Pub Date : 2026-01-14 DOI: 10.1109/TVCG.2026.3653244
Tushar M Athawale, Kenneth Moreland, David Pugmire, Chris R Johnson, Paul Rosen, Matthew Norman, Antigoni Georgiadou, Alireza Entezari

In this paper, we study the propagation of data uncertainty through the marching cubes algorithm for isosurface visualization of correlated uncertain data. Consideration of correlation has been shown to be paramount for avoiding errors in uncertainty quantification and visualization in multiple prior studies. Although the problem of isosurface uncertainty with spatial data correlation has been previously addressed, there are two major limitations to prior treatments. First, there are no analytical formulations for uncertainty quantification of isosurfaces when the data uncertainty is characterized by a Gaussian distribution with spatial correlation. Second, as a consequence of the lack of analytical formulations, existing techniques resort to a Monte Carlo sampling approach, which is expensive and difficult to integrate into visualization tools. To address these limitations, we present a closed-form framework to efficiently derive uncertainty in marching cubes level-sets for Gaussian uncertain data with spatial correlation (MAGIC). To derive closed-form solutions, we leverage Hinkley's derivation of the distribution of the ratio of Gaussian random variables. With our analytical framework, we achieve a significant speed-up and enhanced accuracy of uncertainty quantification over classical Monte Carlo methods. We further accelerate our analytical solutions using many-core processors to achieve speed-ups of up to $585\times$ and integrate them with production visualization tools for broader impact. We demonstrate the effectiveness of our correlation-aware uncertainty framework through experiments on meteorology, urban flow, and astrophysics simulation datasets.
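The central per-edge quantity, the probability that a cell edge crosses the isovalue when its two endpoint values follow a correlated bivariate Gaussian, can be checked numerically as below. This sketch evaluates it with SciPy's bivariate normal CDF rather than through the closed-form Hinkley-based derivation used by MAGIC; the means, standard deviations, correlation, and isovalue are illustrative.

```python
# Sketch: P(one endpoint below the isovalue, the other above) for correlated
# Gaussian endpoint values, via the bivariate normal CDF.
import numpy as np
from scipy.stats import multivariate_normal, norm

def edge_crossing_probability(mu, sigma, rho, isovalue):
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    joint = multivariate_normal(mean=mu, cov=cov)
    both_below = joint.cdf([isovalue, isovalue])
    below_0 = norm(mu[0], sigma[0]).cdf(isovalue)       # marginal CDFs
    below_1 = norm(mu[1], sigma[1]).cdf(isovalue)
    both_above = 1.0 - below_0 - below_1 + both_below   # inclusion-exclusion
    return 1.0 - both_below - both_above                # endpoints disagree

# strongly correlated endpoints rarely disagree about the isovalue
print(edge_crossing_probability(mu=[0.4, 0.6], sigma=[0.1, 0.1], rho=0.9, isovalue=0.5))
```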

Citations: 0
SeparateGen: Semantic Component-based 3D Character Generation from Single Images.
IF 6.5 Pub Date : 2026-01-12 DOI: 10.1109/TVCG.2026.3652452
Dong-Yang Li, Yi-Long Liu, Zi-Xian Liu, Yan-Pei Cao, Meng-Hao Guo, Shi-Min Hu

Creating detailed 3D characters from a single image remains challenging due to the difficulty in separating semantic components during generation. Existing methods often produce entangled meshes with poor topology, hindering downstream applications like rigging and animation. We introduce SeparateGen, a novel framework that generates high-quality 3D characters by explicitly reconstructing them as distinct semantic components (e.g., body, clothing, hair, shoes) from a single, arbitrary-pose image. SeparateGen first leverages a multi-view diffusion model to generate consistent multi-view images in a canonical A-pose. Then, a novel component-aware reconstruction model, SC-LRM, conditioned on these multi-view images, adaptively decomposes and reconstructs each component with high fidelity. To train and evaluate SeparateGen, we contribute SC-Anime, the first large-scale dataset of 7,580 anime-style 3D characters with detailed component-level annotations. Extensive experiments demonstrate that SeparateGen significantly outperforms state-of-the-art methods in both reconstruction quality and multi-view consistency. Furthermore, our component-based approach effectively resolves mesh entanglement issues, enabling seamless rigging and asset reuse. SeparateGen thus represents a step towards generating high-quality, application-ready 3D characters from a single image. The SC-Anime dataset and our code will be publicly released.

Citations: 0
Motif Simplification for BioFabric Network Visualizations: Improving Pattern Recognition and Interpretation.
IF 6.5 Pub Date : 2026-01-01 DOI: 10.1109/TVCG.2025.3634266
Johannes Fuchs, Cody Dunne, Maria-Viktoria Heinle, Daniel A Keim, Sara Di Bartolomeo

Detecting and interpreting common patterns in relational data is crucial for understanding complex topological structures across various domains. These patterns, or network motifs, can often be detected algorithmically. However, visual inspection remains vital for exploring and discovering patterns. This paper focuses on presenting motifs within BioFabric network visualizations, a unique technique that opens opportunities for research on scaling to larger networks, design variations, and layout algorithms that better expose motifs. Our goal is to show how highlighting motifs can assist users in identifying and interpreting patterns in BioFabric visualizations. To this end, we leverage existing motif simplification techniques. We replace edges with glyphs representing fundamental motifs such as staircases, cliques, paths, and connector nodes. The results of our controlled experiment and usage scenarios demonstrate that motif simplification for BioFabric is useful for detecting and interpreting network patterns. Our participants were faster and more confident using the simplified view without sacrificing accuracy. The efficacy of our current motif simplification approach depends on which extant layout algorithm is used. We hope our promising findings on user performance will motivate future research on layout algorithms tailored to maximizing motif presentation. Our supplemental material is available at https://osf.io/f8s3g/?view_only=7e2df9109dfd4e6c85b89ed828320843.
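One motif type, the clique, can be simplified in a few lines with networkx: detect a maximal clique and collapse it into a single glyph node whose outside edges are reattached, so a BioFabric-style drawing renders one glyph instead of many edges. The toy graph and the clique-only handling are assumptions; staircases, paths, connector nodes, and the actual glyph rendering are omitted.

```python
# Sketch: collapse one large clique into a single "glyph" node.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"),   # a 3-clique
                  ("c", "d"), ("d", "e")])              # a path hanging off it

def simplify_clique(graph: nx.Graph, min_size: int = 3) -> nx.Graph:
    simplified = graph.copy()
    for i, clique in enumerate(nx.find_cliques(graph)):
        if len(clique) < min_size:
            continue
        glyph = f"clique_glyph_{i}"
        simplified.add_node(glyph, members=sorted(clique), motif="clique")
        for node in clique:
            # reattach outside edges to the glyph, then drop the member node
            for neighbor in graph.neighbors(node):
                if neighbor not in clique:
                    simplified.add_edge(glyph, neighbor)
            simplified.remove_node(node)
        break   # keep the toy example simple: collapse the first large clique only
    return simplified

print(simplify_clique(g).edges())
```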

{"title":"Motif Simplification for BioFabric Network Visualizations: Improving Pattern Recognition and Interpretation.","authors":"Johannes Fuchs, Cody Dunne, Maria-Viktoria Heinle, Daniel A Keim, Sara Di Bartolomeo","doi":"10.1109/TVCG.2025.3634266","DOIUrl":"10.1109/TVCG.2025.3634266","url":null,"abstract":"<p><p>Detecting and interpreting common patterns in relational data is crucial for understanding complex topological structures across various domains. These patterns, or network motifs, can often be detected algorithmically. However, visual inspection remains vital for exploring and discovering patterns. This paper focuses on presenting motifs within BioFabric network visualizations-a unique technique that opens opportunities for research on scaling to larger networks, design variations, and layout algorithms to better expose motifs. Our goal is to show how highlighting motifs can assist users in identifying and interpreting patterns in BioFabric visualizations. To this end, we leverage existing motif simplification techniques. We replace edges with glyphs representing fundamental motifs such as staircases, cliques, paths, and connector nodes. The results of our controlled experiment and usage scenarios demonstrate that motif simplification for BioFabric is useful for detecting and interpreting network patterns. Our participants were faster and more confident using the simplified view without sacrificing accuracy. The efficacy of our current motif simplification approach depends on which extant layout algorithm is used. We hope our promising findings on user performance will motivate future research on layout algorithms tailored to maximizing motif presentation. Our supplemental material is available at https://osf.io/f8s3g/?view_only=7e2df9109dfd4e6c85b89ed828320843.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"604-614"},"PeriodicalIF":6.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145574931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Understanding Large Language Model Behaviors Through Interactive Counterfactual Generation and Analysis.
IF 6.5 Pub Date : 2026-01-01 DOI: 10.1109/TVCG.2025.3634646
Furui Cheng, Vilem Zouhar, Robin Shing Moon Chan, Daniel Furst, Hendrik Strobelt, Mennatallah El-Assady

Understanding the behavior of large language models (LLMs) is crucial for ensuring their safe and reliable use. However, existing explainable AI (XAI) methods for LLMs primarily rely on word-level explanations, which are often computationally inefficient and misaligned with human reasoning processes. Moreover, these methods often treat explanation as a one-time output, overlooking its inherently interactive and iterative nature. In this paper, we present LLM Analyzer, an interactive visualization system that addresses these limitations by enabling intuitive and efficient exploration of LLM behaviors through counterfactual analysis. Our system features a novel algorithm that generates fluent and semantically meaningful counterfactuals via targeted removal and replacement operations at user-defined levels of granularity. These counterfactuals are used to compute feature attribution scores, which are then integrated with concrete examples in a table-based visualization, supporting dynamic analysis of model behavior. A user study with LLM practitioners and interviews with experts demonstrate the system's usability and effectiveness, emphasizing the importance of involving humans in the explanation process as active participants rather than passive recipients.
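The counterfactual recipe (edit a span, re-score the model, attribute the score change to that span) can be sketched at word granularity as below. The removal-only edits, the toy scoring function, and the example sentence are placeholders; the system additionally keeps counterfactuals fluent and supports user-defined granularities.

```python
# Sketch: feature attribution from removal-based counterfactuals.
from typing import Callable, Dict

def attribution_by_removal(text: str, score: Callable[[str], float]) -> Dict[str, float]:
    words = text.split()
    base = score(text)
    scores = {}
    for i, word in enumerate(words):
        counterfactual = " ".join(words[:i] + words[i + 1:])   # drop one word
        scores[f"{i}:{word}"] = base - score(counterfactual)   # contribution of that word
    return scores

# toy "model": scores how strongly the text looks like a refusal
def toy_refusal_score(text: str) -> float:
    return sum(text.lower().count(w) for w in ("cannot", "sorry", "unable")) / 3.0

print(attribution_by_removal("Sorry, I cannot help with that", toy_refusal_score))
```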

{"title":"Understanding Large Language Model Behaviors Through Interactive Counterfactual Generation and Analysis.","authors":"Furui Cheng, Vilem Zouhar, Robin Shing Moon Chan, Daniel Furst, Hendrik Strobelt, Mennatallah El-Assady","doi":"10.1109/TVCG.2025.3634646","DOIUrl":"10.1109/TVCG.2025.3634646","url":null,"abstract":"<p><p>Understanding the behavior of large language models (LLMs) is crucial for ensuring their safe and reliable use. However, existing explainable AI (XAI) methods for LLMs primarily rely on word-level explanations, which are often computationally inefficient and misaligned with human reasoning processes. Moreover, these methods often treat explanation as a one-time output, overlooking its inherently interactive and iterative nature. In this paper, we present LLM Analyzer, an interactive visualization system that addresses these limitations by enabling intuitive and efficient exploration of LLM behaviors through counterfactual analysis. Our system features a novel algorithm that generates fluent and semantically meaningful counterfactuals via targeted removal and replacement operations at user-defined levels of granularity. These counterfactuals are used to compute feature attribution scores, which are then integrated with concrete examples in a table-based visualization, supporting dynamic analysis of model behavior. A user study with LLM practitioners and interviews with experts demonstrate the system's usability and effectiveness, emphasizing the importance of involving humans in the explanation process as active participants rather than passive recipients.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"846-856"},"PeriodicalIF":6.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145575119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0