
Latest publications in IEEE Transactions on Visualization and Computer Graphics

F2Stories: A Modular Framework for Multi-Objective Optimization of Storylines with a Focus on Fairness.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634228
Tommaso Piselli, Giuseppe Liotta, Fabrizio Montecchiani, Martin Nollenburg, Sara Di Bartolomeo

Storyline visualizations represent character interactions over time. When these characters belong to different groups, a new research question emerges: how can we balance the optimization of readability across groups while preserving the overall narrative structure of the story? Traditional algorithms that optimize global readability metrics (like minimizing crossings) can introduce quality biases between the different groups based on their cardinality and other aspects of the data. The visual consequences of these biases include making characters from minority groups disproportionately harder to follow, and visually deprioritizing important characters when their curves become entangled with numerous secondary characters. We present F2Stories, a modular framework that addresses these challenges in storylines by offering three complementary optimization modes: (1) fairnessMode ensures that no group bears a disproportionate burden of visualization complexity, regardless of its representation in the story; (2) focusMode allows prioritizing a group of characters while maintaining good readability for secondary characters; and (3) standardMode globally optimizes classical aesthetic metrics. Our approach is based on Mixed Integer Linear Programming (MILP), offering optimality guarantees, precise balancing of competing metrics through weighted objectives, and the flexibility to incorporate complex fairness concepts as additional constraints without redesigning the entire algorithm. We conducted an extensive experimental analysis to demonstrate how F2Stories produces storyline visualizations that are fairer, or that prioritize a focus group of characters, while maintaining adherence to established layout constraints. Our evaluation includes comprehensive results from a detailed case study that shows the effectiveness of our approach in real-world narrative contexts. An open access copy of this paper and all supplemental materials are available at osf.io/e2qvy.
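
The abstract does not spell out the MILP formulation, but the fairnessMode idea of bounding each group's share of visualization complexity can be sketched as a min-max objective. The following is a minimal illustrative sketch using the PuLP library; the crossing indicators, group labels, and size normalization are assumptions for illustration, not the paper's actual model, and the ordering constraints that drive the indicator variables are omitted.

```python
import pulp

# Hypothetical potential crossings, each attributed to the group whose
# character it affects; a real instance derives these from the input story.
crossing_group = {0: "A", 1: "A", 2: "A", 3: "B"}
groups = sorted(set(crossing_group.values()))
group_size = {g: sum(1 for v in crossing_group.values() if v == g) for g in groups}

prob = pulp.LpProblem("fair_storyline_sketch", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in crossing_group}

# Ordering constraints that force some x[i] to 1 are omitted; in a full
# model they link x to the characters' vertical permutations per timestep.

# Min-max fairness: minimize the worst per-group crossing load,
# normalized by group size so small groups are not ignored.
t = pulp.LpVariable("t", lowBound=0)
for g in groups:
    load_g = pulp.lpSum(x[i] for i, grp in crossing_group.items() if grp == g)
    prob += load_g <= t * group_size[g]
prob += t  # the objective: push down the maximum normalized load

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(t))
```

Weighted combinations of such terms with classical aesthetic metrics would give the "precise balancing of competing metrics through weighted objectives" the abstract describes.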

Citations: 0
EncQA: Benchmarking Vision-Language Models on Visual Encodings for Charts.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634249
Kushin Mukherjee, Donghao Ren, Dominik Moritz, Yannick Assogba

Multimodal vision-language models (VLMs) continue to achieve ever-improving scores on chart understanding benchmarks. Yet, we find that this progress does not fully capture the breadth of visual reasoning capabilities essential for interpreting charts. We introduce EncQA, a novel benchmark informed by the visualization literature, designed to provide systematic coverage of visual encodings and analytic tasks that are crucial for chart understanding. EncQA provides 2,076 synthetic question-answer pairs, enabling balanced coverage of six visual encoding channels (position, length, area, color quantitative, color nominal, and shape) and eight tasks (find extrema, retrieve value, find anomaly, filter values, compute derived value exact, compute derived value relative, correlate values, and correlate values relative). Our evaluation of 9 state-of-the-art VLMs reveals that performance varies significantly across encodings within the same task, as well as across tasks. Contrary to expectations, we observe that performance does not improve with model size for many task-encoding pairs. Our results suggest that advancing chart understanding requires targeted strategies addressing specific visual reasoning gaps, rather than solely scaling up model or dataset size.
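
To make the "performance varies across encodings within the same task" finding concrete, one can aggregate per-question correctness into a (task, encoding) grid. This is a hypothetical sketch: the record fields mirror the task and channel names in the abstract, but EncQA's actual data schema may differ.

```python
from collections import defaultdict

# Hypothetical evaluation records; EncQA's real schema may differ.
records = [
    {"task": "find extrema", "encoding": "position", "correct": True},
    {"task": "find extrema", "encoding": "area", "correct": False},
    {"task": "retrieve value", "encoding": "color nominal", "correct": True},
]

cells = defaultdict(lambda: [0, 0])  # (task, encoding) -> [hits, total]
for r in records:
    key = (r["task"], r["encoding"])
    cells[key][0] += int(r["correct"])
    cells[key][1] += 1

# Accuracy per cell exposes encoding-dependent gaps within one task.
for (task, enc), (hits, total) in sorted(cells.items()):
    print(f"{task:15s} {enc:15s} {hits / total:.2f}")
```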

Citations: 0
DataWink: Reusing and Adapting SVG-Based Visualization Examples with Large Multimodal Models.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634635
Liwenhan Xie, Yanna Lin, Can Liu, Huamin Qu, Xinhuan Shu

Creating aesthetically pleasing data visualizations remains challenging for users without design expertise or familiarity with visualization tools. To address this gap, we present DataWink, a system that enables users to create custom visualizations by adapting high-quality examples. Our approach uses large multimodal models (LMMs) to extract data encodings from existing SVG-based visualization examples, building on an intermediate representation of visualizations that bridges primitive SVG and visualization programs. Users may express adaptation goals to a conversational agent and control the visual appearance through widgets generated on demand. With an interactive interface, users can modify both data mappings and visual design elements while maintaining the original visualization's aesthetic quality. To evaluate DataWink, we conducted a user study (N=12) with replication and free-form exploration tasks, in which participants recognized DataWink's learnability and effectiveness for personalized authoring. Our results demonstrate the potential of example-driven approaches for democratizing visualization creation.
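
The paper's intermediate representation is not specified here, but the idea of bridging SVG primitives and visualization programs can be illustrated with a small data structure recording which SVG attribute encodes which data field, so an extracted example can be re-bound to new data. The class and field names below are hypothetical, not DataWink's actual IR.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Encoding:
    channel: str     # visual channel, e.g. "x", "height", "fill"
    svg_attr: str    # SVG attribute that carries the channel
    data_field: str  # column of the user's dataset bound to it
    scale: str       # e.g. "linear", "ordinal"

@dataclass
class MarkSpec:
    svg_tag: str                     # "rect", "circle", "path", ...
    encodings: List[Encoding] = field(default_factory=list)

# A bar-chart example extracted from an SVG: rebinding `data_field`
# values adapts the example to new data without touching the raw SVG.
bar = MarkSpec("rect", [
    Encoding("x", "x", "category", "ordinal"),
    Encoding("height", "height", "sales", "linear"),
    Encoding("fill", "fill", "region", "ordinal"),
])
print(bar)
```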

Citations: 0
From Vision to Touch: Bridging Visual and Tactile Principles for Accessible Data Representation.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634254
Kim Marriott, Matthew Butler, Leona Holloway, William Jolley, Bongshin Lee, Bruce Maguire, Danielle Albers Szafir

Tactile graphics are widely used to present maps and statistical diagrams to blind and low vision (BLV) people, with accessibility guidelines recommending their use for graphics where spatial relationships are important. Their use is expected to grow with the advent of commodity refreshable tactile displays. However, in stark contrast to visual information graphics, we lack a clear understanding of the benefits that well-designed tactile information graphics offer over text descriptions for BLV people. To address this gap, we introduce a framework considering the three components of encoding, perception and cognition to examine the known benefits for visual information graphics and explore their applicability to tactile information graphics. This work establishes a preliminary theoretical foundation for the tactile-first design of information graphics and identifies future research avenues.

Citations: 0
VizGenie: Toward Self-Refining, Domain-Aware Workflows for Next-Generation Scientific Visualization.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634655
Ayan Biswas, Terece L Turton, Nishath Rajiv Ranasinghe, Shawn Jones, Bradley Love, William Jones, Aric Hagberg, Han-Wei Shen, Nathan DeBardeleben, Earl Lawrence

We present VizGenie, a self-improving, agentic framework that advances scientific visualization with large language models (LLMs) by orchestrating a collection of domain-specific and dynamically generated modules. Users initially access core functionalities, such as threshold-based filtering, slice extraction, and statistical analysis, through pre-existing tools. For tasks beyond this baseline, VizGenie autonomously employs LLMs to generate new visualization scripts (e.g., VTK Python code), expanding its capabilities on demand. Each generated script undergoes automated backend validation and is seamlessly integrated upon successful testing, continuously enhancing the system's adaptability and robustness. A distinctive feature of VizGenie is its intuitive natural language interface, which allows users to issue high-level feature-based queries (e.g., "visualize the skull" or "highlight tissue boundaries"). The system leverages image-based analysis and visual question answering (VQA) via fine-tuned vision models to interpret these queries precisely, bridging domain expertise and technical implementation. Additionally, users can interactively query generated visualizations through VQA, facilitating deeper exploration. Reliability and reproducibility are further strengthened by Retrieval-Augmented Generation (RAG), which provides context-driven responses while maintaining comprehensive provenance records. Evaluations on complex volumetric datasets demonstrate significant reductions in cognitive overhead for iterative visualization tasks. By integrating curated domain-specific tools with LLM-driven flexibility, VizGenie not only accelerates insight generation but also establishes a sustainable, continuously evolving visualization practice. The resulting platform dynamically learns from user interactions, consistently enhancing support for feature-centric exploration and reproducible research in scientific visualization.
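
The generate-validate-integrate loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not VizGenie's implementation: llm_generate is a placeholder for whatever model call the system uses, and "automated backend validation" is reduced here to a clean exit of the candidate script in a subprocess.

```python
import subprocess
import sys
import tempfile

def llm_generate(request: str) -> str:
    """Placeholder for the LLM call that writes a VTK Python script."""
    raise NotImplementedError

def validate(script: str) -> bool:
    """Backend validation, reduced here to a clean exit of the candidate
    script run in a separate process (a real system would sandbox it)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, timeout=60)
    return result.returncode == 0

registry: dict[str, str] = {}  # the growing collection of validated tools

def handle(request: str, name: str) -> None:
    script = llm_generate(request)
    if validate(script):
        registry[name] = script  # integrate only after successful testing
```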

Citations: 0
Motif Simplification for BioFabric Network Visualizations: Improving Pattern Recognition and Interpretation.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634266
Johannes Fuchs, Cody Dunne, Maria-Viktoria Heinle, Daniel A Keim, Sara Di Bartolomeo

Detecting and interpreting common patterns in relational data is crucial for understanding complex topological structures across various domains. These patterns, or network motifs, can often be detected algorithmically. However, visual inspection remains vital for exploring and discovering patterns. This paper focuses on presenting motifs within BioFabric network visualizations, a unique technique that opens opportunities for research on scaling to larger networks, design variations, and layout algorithms to better expose motifs. Our goal is to show how highlighting motifs can assist users in identifying and interpreting patterns in BioFabric visualizations. To this end, we leverage existing motif simplification techniques. We replace edges with glyphs representing fundamental motifs such as staircases, cliques, paths, and connector nodes. The results of our controlled experiment and usage scenarios demonstrate that motif simplification for BioFabric is useful for detecting and interpreting network patterns. Our participants were faster and more confident using the simplified view without sacrificing accuracy. The efficacy of our current motif simplification approach depends on which extant layout algorithm is used. We hope our promising findings on user performance will motivate future research on layout algorithms tailored to maximizing motif presentation. Our supplemental material is available at https://osf.io/f8s3g/?view_only=7e2df9109dfd4e6c85b89ed828320843.
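
As one concrete example of the algorithmic detection step, cliques (one of the fundamental motifs listed above) can be enumerated before being collapsed into glyphs. A minimal sketch with networkx, using an illustrative size threshold rather than any threshold from the paper:

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for a network to be laid out

# Cliques above an illustrative size threshold become glyph candidates.
glyph_candidates = [c for c in nx.find_cliques(G) if len(c) >= 4]
for clique in glyph_candidates:
    # In a BioFabric layout, the edge rows among these nodes would be
    # replaced by a single clique glyph spanning the members' node rows.
    print("clique glyph over nodes:", sorted(clique))
```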

Citations: 0
Understanding Large Language Model Behaviors Through Interactive Counterfactual Generation and Analysis. 通过交互反事实生成和分析来理解大型语言模型行为。
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634646
Furui Cheng, Vilem Zouhar, Robin Shing Moon Chan, Daniel Furst, Hendrik Strobelt, Mennatallah El-Assady

Understanding the behavior of large language models (LLMs) is crucial for ensuring their safe and reliable use. However, existing explainable AI (XAI) methods for LLMs primarily rely on word-level explanations, which are often computationally inefficient and misaligned with human reasoning processes. Moreover, these methods often treat explanation as a one-time output, overlooking its inherently interactive and iterative nature. In this paper, we present LLM Analyzer, an interactive visualization system that addresses these limitations by enabling intuitive and efficient exploration of LLM behaviors through counterfactual analysis. Our system features a novel algorithm that generates fluent and semantically meaningful counterfactuals via targeted removal and replacement operations at user-defined levels of granularity. These counterfactuals are used to compute feature attribution scores, which are then integrated with concrete examples in a table-based visualization, supporting dynamic analysis of model behavior. A user study with LLM practitioners and interviews with experts demonstrate the system's usability and effectiveness, emphasizing the importance of involving humans in the explanation process as active participants rather than passive recipients.
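
The removal half of such a counterfactual generator can be sketched at a user-chosen granularity; pairing each counterfactual's score change with its removed unit then yields a simple feature attribution. This sketch is illustrative only and omits the paper's fluency-preserving replacement operations; model_score is a stand-in for any scoring function.

```python
import re
from typing import Callable, Iterator, List

def counterfactuals(text: str, granularity: str = "word") -> Iterator[str]:
    """Yield one counterfactual per removed unit at the chosen granularity."""
    if granularity == "word":
        units = text.split()
    else:  # sentence-level removal
        units = re.split(r"(?<=[.!?])\s+", text)
    for i in range(len(units)):
        yield " ".join(units[:i] + units[i + 1:])

def attribution(text: str, model_score: Callable[[str], float]) -> List[float]:
    """Score drop caused by removing a unit ~ that unit's attribution."""
    base = model_score(text)
    return [base - model_score(cf) for cf in counterfactuals(text)]
```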

Citations: 0
PixelatedScatter: Arbitrary-Level Visual Abstraction for Large-Scale Multiclass Scatterplots. PixelatedScatter:大规模多类散点图的任意级别视觉抽象。
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3633908
Ziheng Guo, Tianxiang Wei, Zeyu Li, Lianghao Zhang, Sisi Li, Jiawan Zhang

Overdraw is inevitable in large-scale scatterplots. Current scatterplot abstraction methods lose features in medium-to-low density regions. We propose a visual abstraction method designed to provide better feature preservation across arbitrary abstraction levels for large-scale scatterplots, particularly in medium-to-low density regions. The method consists of three closely interconnected steps: first, we partition the scatterplot into iso-density regions and equalize visual density; then, we allocate pixels for different classes within each region; finally, we reconstruct the data distribution based on pixels. User studies, quantitative and qualitative evaluations demonstrate that, compared to previous methods, our approach better preserves features and exhibits a special advantage when handling ultra-high dynamic range data distributions.
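
The per-region allocation step can be illustrated with a largest-remainder scheme: within one iso-density region, each class receives a pixel budget proportional to its local share. The rounding rule below is an assumption for illustration, not necessarily the paper's allocation method.

```python
def allocate_pixels(counts: dict, budget: int) -> dict:
    """Split a region's pixel budget across classes by local share,
    rounding with the largest-remainder rule."""
    total = sum(counts.values())
    raw = {c: budget * n / total for c, n in counts.items()}
    alloc = {c: int(v) for c, v in raw.items()}
    leftover = budget - sum(alloc.values())
    # Hand remaining pixels to the largest fractional remainders.
    for c in sorted(raw, key=lambda c: raw[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

# In this 8x8-pixel region, the tiny class C still gets a visible pixel.
print(allocate_pixels({"A": 900, "B": 90, "C": 10}, budget=64))
```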

Citations: 0
SynAnno: Interactive Guided Proofreading of Synaptic Annotations.
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634824
Leander Lauenburg, Jakob Troidl, Adam Gohain, Zudi Lin, Hanspeter Pfister, Donglai Wei

Connectomics, a subfield of neuroscience, aims to map and analyze synapse-level wiring diagrams of the nervous system. While recent advances in deep learning have accelerated automated neuron and synapse segmentation, reconstructing accurate connectomes still demands extensive human proofreading to correct segmentation errors. We present SynAnno, an interactive tool designed to streamline and enhance the proofreading of synaptic annotations in large-scale connectomics datasets. SynAnno integrates into existing neuroscience workflows by enabling guided, neuron-centric proofreading. To address the challenges posed by the complex spatial branching of neurons, it introduces a structured workflow with an optimized traversal path and a 3D mini-map for tracking progress. In addition, SynAnno incorporates fine-tuned machine learning models to assist with error detection and correction, reducing the manual burden and increasing proofreading efficiency. We evaluate SynAnno through a user study and a case study involving seven neuroscience experts. Results show that SynAnno significantly accelerates synapse proofreading while reducing cognitive load and annotation errors through structured guidance and visualization support. The source code and interactive demo are available at: https://github.com/PytorchConnectomics/SynAnno.
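
One way to picture a structured traversal over a neuron's branches is a depth-first walk of its skeleton graph, so the annotator finishes each branch before moving on. This sketch is purely illustrative; the paper's optimized traversal path may use a different ordering.

```python
import networkx as nx

# An illustrative neuron skeleton: a soma with two branches, one branching.
skeleton = nx.Graph([("soma", "b1"), ("b1", "b2"), ("b1", "b3"), ("soma", "b4")])

# Depth-first preorder visits each branch to completion before the next.
order = list(nx.dfs_preorder_nodes(skeleton, source="soma"))
print(order)  # ['soma', 'b1', 'b2', 'b3', 'b4']
```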

Citations: 0
F-Hash: Feature-Based Hash Design for Time-Varying Volume Visualization via Multi-Resolution Tesseract Encoding. F-Hash:基于多分辨率Tesseract编码的时变体积可视化特征哈希设计。
IF 6.5 Pub Date: 2026-01-01 DOI: 10.1109/TVCG.2025.3634812
Jianxin Sun, David Lenz, Hongfeng Yu, Tom Peterka

Interactive time-varying volume visualization is challenging due to complex spatiotemporal features and the sheer size of the datasets. Recent works transform the original discrete time-varying volumetric data into continuous Implicit Neural Representations (INRs) to address compression, rendering, and super-resolution in both the spatial and temporal domains. However, training an INR takes a long time to converge, especially when handling large-scale time-varying volumetric datasets. In this work, we propose F-Hash, a novel feature-based multi-resolution Tesseract encoding architecture that greatly improves convergence speed compared with existing input encoding methods for modeling time-varying volumetric data. The proposed design incorporates multi-level collision-free hash functions that map dynamic 4D multi-resolution embedding grids without bucket waste, achieving high encoding capacity with compact encoding parameters. Our encoding method is agnostic to the time-varying feature detection method, making it a unified encoding solution for feature tracking and evolution visualization. Experiments show that F-Hash achieves state-of-the-art convergence speed when training on various time-varying volumetric datasets with diverse features. We also propose an adaptive ray marching algorithm that optimizes sample streaming for faster rendering of the time-varying neural representation.
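
For intuition, the following sketches how a 4D (x, y, z, t) grid vertex can be hashed into a feature-table index, in the general style of multi-resolution spatial hash encodings. The primes and table size are illustrative assumptions; F-Hash's collision-free, feature-based construction is more involved than this generic hash.

```python
import numpy as np

# Large odd constants in the style of common spatial-hash encodings.
PRIMES = np.array([1, 2654435761, 805459861, 3674653429], dtype=np.uint64)

def hash_index(coords: np.ndarray, table_size: int) -> np.ndarray:
    """Map integer 4D grid coordinates of shape (..., 4) at one
    resolution level to indices into that level's feature table."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(4):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

# The 16 corners of one 4D cell, as queried during interpolation.
verts = np.indices((2, 2, 2, 2)).reshape(4, -1).T
print(hash_index(verts, table_size=2**14))
```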

Citations: 0