
IEEE Transactions on Visualization and Computer Graphics: Latest Publications

Contextualization or Rationalization? The Effect of Causal Priors on Data Visualization Interpretation.
IF 6.5 Pub Date: 2026-02-09 DOI: 10.1109/TVCG.2026.3663050
Arran Zeyu Wang, David Borland, Estella Calcaterra, David Gotz

Understanding how individuals interpret charts is a crucial concern for visual data communication. This imperative has motivated a number of studies, including past work demonstrating that causal priors (a priori beliefs about causal relationships between concepts) can significantly influence the perceived strength of variable relationships inferred from visualizations. This paper builds on these previous results, demonstrating that causal priors can also influence the types of patterns that people perceive as the most salient within ambiguous scatterplots that have roughly equal evidence for trend and cluster patterns. Using a mixed-design approach that combines a large-scale online experiment for breadth of findings with an in-person think-aloud study for analytical depth, we investigated how users' interpretations are influenced by the interplay between causal priors and the visualized data patterns. Our analysis suggests two archetypal reasoning behaviors through which people often make their observations: contextualization, in which users accept a visual pattern that aligns with causal priors and use their existing knowledge to enrich interpretation, and rationalization, in which users encounter a pattern that conflicts with causal priors and attempt to explain away the discrepancy by invoking external factors, such as positing confounding variables or data selection bias. These findings provide initial evidence highlighting the critical role of causal priors in shaping high-level visualization comprehension, and introduce a vocabulary for describing how users reason about data that either confirms or challenges prior beliefs about causality.
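To make the notion of "roughly equal evidence for trend and cluster patterns" concrete, the following minimal Python sketch (an illustration, not the authors' stimulus-generation code) synthesizes such an ambiguous scatterplot and scores both readings with off-the-shelf statistics; the specific distributions and metrics are assumptions.

```python
# A minimal sketch (not the paper's stimuli): synthesize a scatterplot whose
# points can be read either as a linear trend or as two clusters, then quantify
# both readings with simple, off-the-shelf statistics.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Two Gaussian blobs placed along a diagonal: cluster structure and an
# upward trend are both plausible readings of the same point cloud.
blob_a = rng.normal(loc=[2.0, 2.0], scale=0.6, size=(100, 2))
blob_b = rng.normal(loc=[6.0, 6.0], scale=0.6, size=(100, 2))
points = np.vstack([blob_a, blob_b])

# "Trend evidence": strength of the linear relationship between x and y.
r, _ = pearsonr(points[:, 0], points[:, 1])

# "Cluster evidence": how well a 2-cluster partition separates the points.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
sil = silhouette_score(points, labels)

print(f"trend evidence (Pearson r)   : {r:.2f}")
print(f"cluster evidence (silhouette): {sil:.2f}")
```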

{"title":"Contextualization or Rationalization? The Effect of Causal Priors on Data Visualization Interpretation.","authors":"Arran Zeyu Wang, David Borland, Estella Calcaterra, David Gotz","doi":"10.1109/TVCG.2026.3663050","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3663050","url":null,"abstract":"<p><p>Understanding how individuals interpret charts is a crucial concern for visual data communication. This imperative has motivated a number of studies, including past work demonstrating that causal priors-a priori belief about causal relationships between concepts-can have significant influences on the perceived strength of variable relationships inferred from visualizations. This paper builds on these previous results, demonstrating that causal priors can also influence the types of patterns that people perceive as the most salient within ambiguous scatterplots that have roughly equal evidence for trend and cluster patterns. Using a mixed-design approach that combines a largescale online experiment for breadth of findings with an in-person think-aloud study for analytical depth, we investigated how users' interpretations are influenced by the interplay between causal priors and the visualized data patterns. Our analysis suggests two archetypal reasoning behaviors through which people often make their observations: contextualization, in which users accept a visual pattern that aligns with causal priors and use their existing knowledge to enrich interpretation, and rationalization, in which users encounter a pattern that conflicts with causal priors and attempt to explain away the discrepancy by invoking external factors, such as positing confounding variables or data selection bias. These findings provide initial evidence highlighting the critical role of causal priors in shaping high-level visualization comprehension, and introduce a vocabulary for describing how users reason about data that either confirms or challenges prior beliefs of causality.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146151584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AnchorCrafter: Animate Cyber-Anchors Selling Your Products via Human-Object Interacting Video Generation.
IF 6.5 Pub Date: 2026-02-09 DOI: 10.1109/TVCG.2026.3662720
Ziyi Xu, Ziyao Huang, Juan Cao, Yong Zhang, Xiaodong Cun, Qing Shuai, Yuchen Wang, Linchao Bao, Fan Tang

The generation of anchor-style product promotion videos presents promising opportunities in e-commerce, advertising, and consumer engagement. Despite advancements in pose-guided human video generation, creating product promotion videos remains challenging. In addressing this challenge, we identify the integration of human-object interactions (HOI) into pose-guided human video generation as a core issue. To this end, we introduce AnchorCrafter, a novel diffusion-based system designed to generate 2D videos featuring a target human and a customized object, achieving high visual fidelity and controllable interactions. Specifically, we propose two key innovations: the HOI-appearance perception, which enhances object appearance recognition from arbitrary multi-view perspectives and disentangles object and human appearance, and the HOI-motion injection, which enables complex human-object interactions by overcoming challenges in object trajectory conditioning and inter-occlusion management. Extensive experiments show that our system improves object appearance preservation by 7.5%, and achieves the best video quality compared to existing state-of-the-art approaches. It also outperforms existing approaches in maintaining human motion consistency and high-quality video generation. Project page including data, code, and Huggingface demo: https://github.com/cangcz/AnchorCrafter.
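As a loose illustration of what "object trajectory conditioning" can look like in general (an assumed, generic construction, not AnchorCrafter's HOI-motion injection), the sketch below rasterizes a per-frame object trajectory into Gaussian heatmaps that could be concatenated with the other conditioning inputs of a video generation backbone.

```python
# One generic (assumed) way to turn an object trajectory into a dense
# conditioning signal for video generation: rasterize the object center of
# each frame as a Gaussian heatmap aligned with the video frames.
import numpy as np

def trajectory_heatmaps(centers, height=64, width=64, sigma=3.0):
    # centers: (T, 2) pixel coordinates of the object per frame -> (T, H, W).
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for cx, cy in centers:
        maps.append(np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2)))
    return np.stack(maps)

# Toy trajectory: the object moves diagonally across 8 frames.
traj = np.stack([np.linspace(10, 50, 8), np.linspace(20, 40, 8)], axis=1)
print(trajectory_heatmaps(traj).shape)   # (8, 64, 64)
```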

{"title":"AnchorCrafter: Animate Cyber-Anchors Selling Your Products via Human-Object Interacting Video Generation.","authors":"Ziyi Xu, Ziyao Huang, Juan Cao, Yong Zhang, Xiaodong Cun, Qing Shuai, Yuchen Wang, Linchao Bao, Fan Tang","doi":"10.1109/TVCG.2026.3662720","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3662720","url":null,"abstract":"<p><p>The generation of anchor-style product promotion videos presents promising opportunities in e-commerce, advertising, and consumer engagement. Despite advancements in pose-guided human video generation, creating product promotion videos remains challenging. In addressing this challenge, we identify the integration of human-object interactions (HOI) into pose-guided human video generation as a core issue. To this end, we introduce AnchorCrafter, a novel diffusion-based system designed to generate 2D videos featuring a target human and a customized object, achieving high visual fidelity and controllable interactions. Specifically, we propose two key innovations: the HOI-appearance perception, which enhances object appearance recognition from arbitrary multi-view perspectives and disentangles object and human appearance, and the HOI-motion injection, which enables complex human-object interactions by overcoming challenges in object trajectory conditioning and inter-occlusion management. Extensive experiments show that our system improves object appearance preservation by 7.5%, and achieves the best video quality compared to existing state-of-the-art approaches. It also outperforms existing approaches in maintaining human motion consistency and high-quality video generation. Project page including data, code, and Huggingface demo: https://github.com/cangcz/AnchorCrafter.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146151593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HiFormer: Hierarchical Transformer with Box-packed Positional Encoding for 3D Part Assembly.
IF 6.5 Pub Date: 2026-02-09 DOI: 10.1109/TVCG.2026.3662816
Songle Chen, Lulu Dong, Yijiao Zhou, Siguang Chen, Kai Xu

Estimating the 6-DoF posture of parts in assembly-based modeling is a critical task in the fields of computer graphics, computer vision, and robotics. A typical scenario involves enabling a machine agent to automatically assemble IKEA furniture using the provided parts. This paper presents HiFormer, a novel Hierarchical Transformer with Box-packed Positional Encoding, designed for highly automatic 3D part assembly. Our method addresses three important issues commonly encountered in 3D part assembly: 1) How to mitigate the overfitting problem associated with Transformer-based feature learning for 3D point clouds? 2) How to effectively model the relationships between the intragroup and intergroup parts? 3) How to compute positional encoding and integrate it into the Transformer for parts with diverse geometric forms in the coarse-to-fine assembly process? These challenges are tackled through three key contributions: 1) a multi-task 3D Swin Transformer with a two-stage training strategy for feature extraction, 2) a novel hierarchical Transformer for capturing part relationships at flattening, intragroup, and intergroup levels, and 3) an innovative box-packed positional encoding that enhances the Transformer by incorporating query, key, and value information derived from relative box positions. On the PartNet benchmark, our method outperforms the state-of-the-art PWH-MP model on three representative categories (Chair, Table, and Lamp), achieving average improvements of 2.84% in Part Accuracy (PA) and 3.72% in Connection Accuracy (CA) for diversity modeling (with noise), and 3.55% in PA and 3.21% in CA for deterministic modeling (without noise).
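The sketch below is a hypothetical illustration of the general idea of deriving an attention bias from relative box geometry (relative center offsets and log size ratios fed through a small MLP); the descriptor, network sizes, and usage are assumptions, not HiFormer's box-packed positional encoding.

```python
# Assumed sketch: a relative positional bias for attention over parts, where
# each part is summarized by an axis-aligned box (center + size).
import torch
import torch.nn as nn

class BoxRelativeBias(nn.Module):
    def __init__(self, num_heads: int, hidden: int = 32):
        super().__init__()
        # 6-D relative descriptor -> one bias value per attention head.
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, num_heads)
        )

    def forward(self, centers: torch.Tensor, sizes: torch.Tensor) -> torch.Tensor:
        # centers, sizes: (N, 3) per-part box centers and extents.
        delta = centers[:, None, :] - centers[None, :, :]                     # (N, N, 3)
        ratio = torch.log(sizes[:, None, :] / sizes[None, :, :].clamp(min=1e-6))
        rel = torch.cat([delta, ratio], dim=-1)                               # (N, N, 6)
        return self.mlp(rel).permute(2, 0, 1)                                 # (heads, N, N)

# Usage: add the bias to scaled dot-product attention logits.
N, heads, dim = 8, 4, 64
centers, sizes = torch.rand(N, 3), torch.rand(N, 3) + 0.1
q = k = torch.rand(heads, N, dim)
logits = q @ k.transpose(-1, -2) / dim ** 0.5 + BoxRelativeBias(heads)(centers, sizes)
print(logits.softmax(dim=-1).shape)   # torch.Size([4, 8, 8])
```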

{"title":"HiFormer: Hierarchical Transformer with Box-packed Positional Encoding for 3D Part Assembly.","authors":"Songle Chen, Lulu Dong, Yijiao Zhou, Siguang Chen, Kai Xu","doi":"10.1109/TVCG.2026.3662816","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3662816","url":null,"abstract":"<p><p>Estimating the 6-DoF posture of parts in assembly-based modeling is a critical task in the fields of computer graphics, computer vision and robotics. A typical scenario involves enabling a machine agent to automatically assemble IKEA furniture using the provided parts. This paper presents HiFormer, a novel Hierarchical Transformer with Box-packed Positional Encoding, designed for highly automatic 3D part assembly. Our method addresses three important issues commonly encountered in 3D part assembly: 1) How to mitigate the overfitting problem associated with Transformer-based feature learning for 3D point clouds? 2) How to effectively model the relationships between the intragroup and intergroup parts? 3) How to compute positional encoding and integrate it into the Transformer for parts with diverse geometric forms in the coarse-to-fine assembly process? These challenges are tackled through three key contributions: 1) a multi-task 3D Swin Transformer with a two-stage training strategy for feature extraction, 2) a novel hierarchical Transformer for capturing part relationships at flattening, intragroup, and intergroup levels, and 3) an innovative box-packed positional encoding that enhances the Transformer by incorporating query, key, and value information derived from relative box positions. On the PartNet benchmark, our method outperforms the state-of-the-art PWH-MP model on three representative categories-Chair, Table, and Lamp-, achieving average improvements of 2.84% in Part Accuracy (PA) and 3.72% in Connection Accuracy (CA) for diversity modeling (with noise), and 3.55% in PA and 3.21% in CA for deterministic modeling (without noise).</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146151531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Explorative Analysis of Dynamic Force Networks in 2D Photoelastic Disks Ensembles.
IF 6.5 Pub Date: 2026-02-06 DOI: 10.1109/TVCG.2026.3660683
Farhan Rasheed, Abrar Naseer, Talha Bin Masood, Tejas G Murthy, Vijay Natarajan, Ingrid Hotz

This paper presents an interactive analysis framework for exploring data from photoelastic disk experiments, which serve as a model for two-dimensional granular materials. Granular materials, composed of discrete particles such as sand or gravel, exhibit behaviors resembling fluid or solid states depending on the system configuration. These behaviors arise from interparticle contact forces, which form complex force networks that govern the material's macroscopic behavior. Our framework is specifically designed to analyze such 2D ensembles of dynamic force networks, enabling the identification and characterization of their underlying structures. The framework is built around a topology-based, multiscale data segmentation in terms of force chains and cycles. The analysis methods are structured across three levels: (1) multiscale analysis of individual instances under specific loading conditions, (2) detailed exploration of single experiments encompassing a series of loading and unloading cycles, and (3) comparative analysis across experiments conducted under similar and differing setups. We demonstrate the capabilities of our framework with a case study for each of these levels.
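For readers unfamiliar with force networks, the following toy sketch (illustrative data and threshold, not the paper's topology-based segmentation) shows the underlying graph abstraction: particles as nodes, force-weighted contacts as edges, and strong-force subgraphs inspected for chain-like components and cycles.

```python
# Toy sketch of the graph abstraction behind force-network analysis.
import networkx as nx

# (particle_i, particle_j, contact_force) for a small toy packing.
contacts = [
    (0, 1, 2.1), (1, 2, 1.9), (2, 3, 2.4),      # a strong chain 0-1-2-3
    (3, 4, 0.3), (4, 5, 0.2),                   # weak contacts
    (5, 6, 1.8), (6, 7, 2.0), (7, 5, 1.7),      # a strong 3-cycle 5-6-7
]

G = nx.Graph()
G.add_weighted_edges_from(contacts, weight="force")

# Keep only contacts whose force exceeds the mean: the "strong" network.
mean_force = sum(f for _, _, f in contacts) / len(contacts)
strong = nx.Graph(
    (u, v, d) for u, v, d in G.edges(data=True) if d["force"] > mean_force
)

# Connected components of the strong network act as candidate force chains;
# cycles within them correspond to load-bearing loops.
for comp in nx.connected_components(strong):
    sub = strong.subgraph(comp)
    print("component:", sorted(comp), "cycles:", nx.cycle_basis(sub))
```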

Citations: 0
From Sketch to Reality: Enabling High-Quality, Cross-Category 3D Model Generation from Free-Hand Sketches with Minimal Data.
IF 6.5 Pub Date: 2026-02-06 DOI: 10.1109/TVCG.2026.3661544
Ying Zang, Chunan Yu, Jiahao Zhang, Jing Li, Shengyuan Zhang, Lanyun Zhu, Chaotao Ding, Renjun Xu, Tianrun Chen

This paper presents a novel approach for generating high-quality, cross-category 3D models from free-hand sketches with limited training data. To our knowledge, we propose the first semi-supervised learning method for sketch-to-3D model conversion. We design a coarse-to-fine pipeline that performs semi-supervised learning in the coarse stage and trains a diffusion-based refiner to obtain a high-resolution 3D model. We designed a sketch-augmentation method for semi-supervised learning and integrated priors such as CLIP loss, shape prototypes, and adversarial loss to help generate high-quality results even from abstract and imprecise sketches. We also introduce an innovative procedural 3D generation method based on CAD code, which helps pre-train part of the network before fine-tuning with limited real data. Our approach, coupled with a specifically designed curriculum learning strategy, allows us to generate high-quality 3D models across multiple categories with as few as 300 sketch-3D model pairs, marking a significant advancement over previous single-category approaches. In addition, we introduce the KO2D dataset, the largest collection of hand-drawn sketch-3D pairs, to support further research in this area. As sketches are a far more intuitive and detailed way for users to express their unique ideas, we believe that this paper moves us closer to democratizing 3D content creation, enabling anyone to transform their ideas into 3D models effortlessly.
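As a rough illustration of what a sketch-augmentation step might look like (a hypothetical helper; the stroke representation and parameters are assumptions, not the paper's method), the snippet below perturbs free-hand strokes with random stroke dropout and point jitter.

```python
# Hypothetical sketch-augmentation helper: perturb free-hand strokes so a
# model sees more abstract / imprecise variants of each training sketch.
import numpy as np

def augment_sketch(strokes, drop_prob=0.1, jitter_std=0.01, rng=None):
    rng = rng or np.random.default_rng()
    augmented = []
    for stroke in strokes:
        # Randomly drop whole strokes to mimic sparser, more abstract drawings.
        if len(strokes) > 1 and rng.random() < drop_prob:
            continue
        # Jitter point coordinates to mimic imprecise free-hand lines.
        noisy = stroke + rng.normal(0.0, jitter_std, size=stroke.shape)
        augmented.append(noisy)
    return augmented

# Toy usage: two strokes in normalized [0, 1] canvas coordinates.
sketch = [np.array([[0.1, 0.1], [0.5, 0.5], [0.9, 0.9]]),
          np.array([[0.2, 0.8], [0.8, 0.2]])]
print([s.shape for s in augment_sketch(sketch, rng=np.random.default_rng(0))])
```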

Citations: 0
CoDA: Interactive Segmentation and Morphological Analysis of Dendroid Structures Exemplified on Stony Cold-Water Corals.
IF 6.5 Pub Date: 2026-02-03 DOI: 10.1109/TVCG.2026.3656066
Kira Schmitt, Jurgen Titschack, Daniel Baum

Dendroid stony corals build highly complex colonies that develop by asexual reproduction from a single coral polyp sitting in a cup-like exoskeleton, called a corallite, resulting in a tree-like branching pattern of the exoskeleton. Despite their beauty and ecological importance as reef builders in tropical shallow water and in huge cold-water coral mounds in the deep ocean, systematic studies investigating the ontogenetic morphological development of such coral colonies are largely missing. The main reasons for this lack of study are the large number of corallites and the existence of many secondary joints/coenosteal bridges in the ideally tree-like structure, which make a reconstruction of the skeleton tree extremely tedious. Herein, we present CoDA, the Coral Dendroid structure Analyzer, a visual analytics toolkit that, for the first time, makes it possible to systematically create skeleton trees representing the correct biological relationships of even very complex dendroid stony corals and to perform ontogenetic morphological analyses based on them. Starting with an initial instance segmentation of the calices/corallites, CoDA estimates the skeleton tree and provides convenient tools and visualizations for proofreading and correcting the segmentation and skeleton tree. Part of CoDA is CoDA.Graph, a feature-rich link-and-brush user interface for showing the extracted morphological features and graph layouts of the skeleton tree, enabling real-time exploration of complex coral colonies and their building blocks, the individual corallites and branches. The use of CoDA is exemplified on multiple specimens of the three most important reef-building cold-water coral species with largely varying morphotypes.
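A minimal, assumed illustration of how an initial skeleton-tree estimate could be derived from segmented corallites is given below: centroids are connected by a Euclidean minimum spanning tree and rooted at the founder corallite. CoDA's actual estimation and proofreading tools are more sophisticated; this only conveys the idea of turning an instance segmentation into a tree.

```python
# Illustrative (not CoDA's) first skeleton-tree estimate from corallite
# centroids: Euclidean minimum spanning tree rooted at the founder corallite.
import numpy as np
import networkx as nx

centroids = np.array([
    [0.0, 0.0, 0.0],    # founder corallite (root)
    [0.5, 0.1, 0.9],
    [-0.4, 0.2, 1.0],
    [0.9, 0.0, 1.8],
    [-0.8, 0.3, 1.9],
])

# Complete graph weighted by centroid distance, then its minimum spanning tree.
G = nx.complete_graph(len(centroids))
for u, v in G.edges:
    G[u][v]["weight"] = float(np.linalg.norm(centroids[u] - centroids[v]))
mst = nx.minimum_spanning_tree(G)

# Orient the tree away from the founder to obtain parent-child relations.
tree = nx.bfs_tree(mst, source=0)
print(sorted(tree.edges()))   # [(0, 1), (1, 2), (1, 3), (2, 4)]
```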

Citations: 0
DGM-RDW: Redirected Walking with Dynamic Geometric Mapping Between Environments.
IF 6.5 Pub Date: 2026-02-03 DOI: 10.1109/TVCG.2026.3660749
Miao Wang, Qian Wang, Yi-Jun Li

Redirected walking (RDW) subtly adjusts the user's visual perspective on head-mounted displays during natural walking to reduce forced resets, thus enlarging the size of the virtual environment that can be explored beyond that of the physical environment. Alignment-based RDW controllers aim to minimize spatial discrepancies by optimizing the alignment between the user's physical and virtual environments. We introduce a novel alignment-based method that dynamically calculates mapping functions between physical and virtual geometries to enhance the algorithm's awareness of the RDW environments. To achieve this, we first construct an abstract model defining a mapping function between physical and virtual geometries and establish feasibility constraints in differential form. We then concretize this mapping, optimize it, and develop a practical implementation for dynamic geometric mapping in RDW. Our approach distinguishes itself by determining dense spatial mappings around the user, rather than aligning environments according to limited metrics. Through extensive testing, our algorithm has proven to markedly decrease reset incidents in natural walking, surpassing existing RDW controllers. The introduction of dynamic geometric mapping provides a fresh perspective, contributing significant insights and advancing the field.
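To give a flavor of what alignment-based RDW controllers optimize, the toy sketch below compares free-walking distances to the boundary in a physical and a virtual room over sampled directions; the rectangular rooms, ray-marching step, and misalignment measure are illustrative assumptions, far simpler than the paper's dense dynamic geometric mapping.

```python
# Toy alignment measure for RDW: how far can the user walk before hitting a
# boundary in the physical vs. the virtual room, per sampled direction?
import numpy as np

def distance_to_rect_boundary(pos, heading, half_w, half_h, max_d=50.0, step=0.05):
    # March a ray from `pos` along `heading` until it exits the rectangle.
    d = 0.0
    direction = np.array([np.cos(heading), np.sin(heading)])
    while d < max_d:
        p = pos + d * direction
        if abs(p[0]) > half_w or abs(p[1]) > half_h:
            return d
        d += step
    return max_d

def misalignment(phys_pos, phys_heading, virt_pos, virt_heading, n_dirs=16):
    # Mean absolute difference of free-walking distance over sampled directions.
    offsets = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    diffs = [
        abs(distance_to_rect_boundary(phys_pos, phys_heading + o, 2.0, 2.0)
            - distance_to_rect_boundary(virt_pos, virt_heading + o, 5.0, 5.0))
        for o in offsets
    ]
    return float(np.mean(diffs))

print(misalignment(np.array([0.0, 0.0]), 0.0, np.array([0.0, 0.0]), 0.0))
```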

Citations: 0
Stable-Hair v2: Real-World Hair Transfer via Multiple-View Diffusion Model.
IF 6.5 Pub Date: 2026-02-03 DOI: 10.1109/TVCG.2026.3659861
Kuiyuan Sun, Yuxuan Zhang, Jichao Zhang, Jiaming Liu, Wei Wang, Nicu Sebe, Yao Zhao

While diffusion-based methods have shown impressive capabilities in capturing diverse and complex hairstyles, their ability to generate consistent and high-quality multi-view outputs, which is crucial for real-world applications such as digital humans and virtual avatars, remains underexplored. In this paper, we propose Stable-Hair v2, a novel diffusion-based multi-view hair transfer framework. To the best of our knowledge, this is the first work to leverage multiple-view diffusion models for robust, high-fidelity, and view-consistent hair transfer across multiple perspectives. We introduce a comprehensive multi-view training data generation pipeline to generate high-quality triplet data, including bald images, reference hairstyles, and view-aligned source-bald pairs. Our multi-view hair transfer model integrates polar-azimuth embeddings for pose conditioning and temporal attention layers to ensure smooth transitions between views. To optimize this model, we design a novel multi-stage training strategy consisting of Pose-Controllable Latent IdentityNet training, Hair Extractor training, and Temporal Attention training. Extensive experiments demonstrate that our method accurately transfers detailed and realistic hairstyles to source subjects while achieving seamless and consistent results across views, significantly outperforming existing methods and establishing a new benchmark in multi-view hair transfer. Code is publicly available at https://github.com/sunkymepro/StableHairV2.
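The following is a minimal, assumed sketch of what "polar-azimuth embeddings for pose conditioning" could look like in code: sinusoidal features of elevation and azimuth projected to a conditioning vector. The dimensions and projection layer are illustrative choices, not the paper's exact design.

```python
# Assumed sketch: encode a camera's elevation (polar) and azimuth angles with
# sinusoidal features and project them to a conditioning width.
import math
import torch
import torch.nn as nn

def angle_embedding(angles: torch.Tensor, num_freqs: int = 8) -> torch.Tensor:
    # angles: (B,) in radians -> (B, 2 * num_freqs) sin/cos features.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=angles.dtype)
    scaled = angles[:, None] * freqs[None, :]
    return torch.cat([scaled.sin(), scaled.cos()], dim=-1)

class PolarAzimuthEmbed(nn.Module):
    def __init__(self, num_freqs: int = 8, cond_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(4 * num_freqs, cond_dim)

    def forward(self, polar: torch.Tensor, azimuth: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([angle_embedding(polar), angle_embedding(azimuth)], dim=-1)
        return self.proj(feats)   # (B, cond_dim), ready to add to pose tokens

views = torch.linspace(0, 2 * math.pi, steps=4)     # four azimuths around the head
polar = torch.full_like(views, math.pi / 2)          # fixed elevation
print(PolarAzimuthEmbed()(polar, views).shape)        # torch.Size([4, 256])
```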

Citations: 0
Cybersickness Abatement from Repeated Exposure Generalizes across Experiences.
IF 6.5 Pub Date: 2026-02-02 DOI: 10.1109/TVCG.2026.3659810
Jonathan W Kelly, Taylor A Doty, Michael C Dorneich, Stephen B Gilbert

Cybersickness, or sickness caused by virtual reality (VR), represents a significant threat to the usability of VR applications. Repeated exposure to the same VR stimulus causes a reduction in cybersickness, referred to as Cybersickness Abatement from Repeated Exposure (CARE). This study examined whether the benefits of CARE generalize across distinct VR contexts, which was operationalized as three distinct games (a climbing game, a puzzle game, and a stealth survival game). Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure condition played one VR game (either a puzzle game or a climbing game) on three separate days followed by a different VR game (a survival game) on the fourth day. Those in the Single Exposure condition played the survival game once. The three games all differed in several ways, including environment and task, whereas the puzzle and survival games shared a similar joystick locomotion interface that differed from the locomotion interface in the climbing game. Results indicate that cybersickness on Day 4 of the Repeated Exposure condition was significantly lower than that in the Single Exposure condition, regardless of which game was experienced on Days 1-3. The practical implication of this finding is that CARE that occurs in one VR context can generalize to a novel context with a distinct environment, task, and locomotion interface. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement and habituation. These results support systematic exposure as an approach to reducing cybersickness.

Citations: 0
DrainScope: Visual Analytics of Urban Drainage System.
IF 6.5 Pub Date: 2026-02-02 DOI: 10.1109/TVCG.2026.3659985
Mingwei Lin, Zikun Deng, Qin Huang, Yiyi Ma, Lin-Ping Yuan, Jie Bao, Yu Zheng, Yi Cai

Urban drainage systems, often designed for outdated rainfall assumptions, are increasingly unable to cope with extreme rainfall events. This leads to flooding, infrastructure damage, and economic losses, necessitating effective diagnostic and improvement strategies. In current practice, conventional analysis platforms built on hydrological-hydraulic models provide only limited analytical support, making it difficult to pinpoint defects, inspect causal mechanisms, or evaluate alternative design options in an integrated manner. In this paper, we develop DrainScope, to our knowledge the first visual analytics approach for comprehensive diagnosis and iterative improvement of urban drainage systems. Defects are initially observed in the map view, after which DrainScope extracts the critical sub-systems associated with them using a rule-based search strategy, enabling focused analysis. It introduces a novel drainage-oriented Sankey diagram to visualize internal flow dynamics within the focused, static drainage system, revealing the causes of identified system defects. Furthermore, it enables flexible modification of drainage components corresponding to identified defects, coupled with a comparison view for rapid, iterative evaluation of improvement plans. We evaluate DrainScope through a real-world case study and positive feedback collected from domain experts.
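As a simplified, hypothetical illustration of extracting a critical sub-system around a defect (not DrainScope's rule-based search), the sketch below takes a directed drainage graph whose edges point downstream and collects everything draining into an overloaded node plus its route toward the outfall.

```python
# Hypothetical sub-system extraction around a defect in a drainage network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("inlet_a", "pipe_1"), ("inlet_b", "pipe_1"),
    ("pipe_1", "junction"), ("inlet_c", "junction"),
    ("junction", "trunk"), ("trunk", "outfall"),
])

defect = "junction"   # e.g. a node flagged as surcharging in the simulation

upstream = nx.ancestors(G, defect)        # everything draining into the defect
downstream = nx.descendants(G, defect)    # its route toward the outfall
critical = G.subgraph(upstream | downstream | {defect})

print(sorted(critical.nodes()))
```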

Citations: 0