Explorative Analysis of Dynamic Force Networks in 2D Photoelastic Disks Ensembles
Pub Date: 2026-02-06 | DOI: 10.1109/TVCG.2026.3660683
Farhan Rasheed, Abrar Naseer, Talha Bin Masood, Tejas G Murthy, Vijay Natarajan, Ingrid Hotz
This paper presents an interactive analysis framework for exploring data from photoelastic disk experiments, which serve as a model for two-dimensional granular materials. Granular materials, composed of discrete particles such as sand or gravel, exhibit behaviors resembling fluid or solid states depending on the system configuration. These behaviors arise from interparticle contact forces, which form complex force networks that govern the material's macroscopic behavior. Our framework is specifically designed to analyze such 2D ensembles of dynamic force networks, enabling the identification and characterization of their underlying structures. The framework is built around a topology-based, multiscale data segmentation in terms of force chains and cycles. The analysis methods are structured across three levels: (1) multiscale analysis of individual instances under specific loading conditions, (2) detailed exploration of single experiments encompassing a series of loading and unloading cycles, and (3) comparative analysis across experiments conducted under similar and differing setups. We demonstrate the capabilities of our framework with a case study for each of these levels.
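The chain-and-cycle segmentation can be pictured on a toy contact network. Below is a minimal sketch, assuming the force network is a networkx graph whose edge weights are contact-force magnitudes; the threshold sweep and the bridge-based notion of "chain" are illustrative stand-ins for the paper's topology-based segmentation, not its actual algorithm:

```python
import networkx as nx

def chains_and_cycles(G, threshold):
    """Split the contact network, thresholded at a force level, into cycles and chains."""
    H = nx.Graph((u, v) for u, v, f in G.edges(data="force") if f >= threshold)
    cycles = nx.cycle_basis(H)          # independent force cycles (loops of particles)
    chains = list(nx.bridges(H))        # chain-like edges that belong to no cycle
    return cycles, chains

# Toy contact network: a force cycle (triangle 0-1-2) plus a two-contact chain.
G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 2.0), (1, 2, 1.5), (2, 0, 1.8), (2, 3, 0.9), (3, 4, 0.4)],
    weight="force",
)
for t in (0.5, 1.0):                    # sweeping the threshold gives the multiscale view
    cycles, chains = chains_and_cycles(G, t)
    print(f"threshold {t}: cycles={cycles}, chains={chains}")
```

Raising the threshold peels away weak contacts first, so structures that persist across many thresholds correspond to the dominant force-carrying features.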
{"title":"Explorative Analysis of Dynamic Force Networks in 2D Photoelastic Disks Ensembles.","authors":"Farhan Rasheed, Abrar Naseer, Talha Bin Masood, Tejas G Murthy, Vijay Natarajan, Ingrid Hotz","doi":"10.1109/TVCG.2026.3660683","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3660683","url":null,"abstract":"<p><p>This paper presents an interactive analysis framework for exploring data from photoelastic disk experiments, which serve as a model for two-dimensional granular materials. Granular materials, composed of discrete particles such as sand or gravel, exhibit behaviors resembling fluid or solid states depending on the system configuration. These behaviors arise from interparticle contact forces, which form complex force networks that govern the material's macroscopic behavior. Our framework is specifically designed to analyze such 2D ensembles of dynamic force networks, enabling the identification and characterization of their underlying structures. The framework is built around a topology-based, multiscale data segmentation in terms of force chains and cycles. The analysis methods are structured across three levels: (1) multiscale analysis of individual instances under specific loading conditions, (2) detailed exploration of single experiments encompassing a series of loading and unloading cycles, and (3) comparative analysis across experiments conducted under similar and differing setups. We demonstrate the capabilities of our framework with a case study for each of these levels.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146133794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From Sketch to Reality: Enabling High-Quality, Cross-Category 3D Model Generation from Free-Hand Sketches with Minimal Data
Pub Date: 2026-02-06 | DOI: 10.1109/TVCG.2026.3661544
Ying Zang, Chunan Yu, Jiahao Zhang, Jing Li, Shengyuan Zhang, Lanyun Zhu, Chaotao Ding, Renjun Xu, Tianrun Chen
This paper presents a novel approach for generating high-quality, cross-category 3D models from free-hand sketches with limited training data. To our knowledge, we propose the first semi-supervised learning method for sketch-to-3D model conversion. We design a coarse-to-fine pipeline that performs semi-supervised learning in the coarse stage and trains a diffusion-based refiner to produce a high-resolution 3D model. We further devise a sketch-augmentation method for semi-supervised learning and integrate priors such as a CLIP loss, shape prototypes, and an adversarial loss to help generate high-quality results even from abstract and imprecise sketches. We also introduce a procedural 3D generation method based on CAD code, which helps pre-train part of the network before fine-tuning with limited real data. Our approach, coupled with a specifically designed curriculum learning strategy, allows us to generate high-quality 3D models across multiple categories with as few as 300 sketch-3D model pairs, marking a significant advance over previous single-category approaches. In addition, we introduce the KO2D dataset, the largest collection of hand-drawn sketch-3D pairs, to support further research in this area. As sketches are a far more intuitive and detailed way for users to express their unique ideas, we believe this work moves us closer to democratizing 3D content creation, enabling anyone to transform their ideas into 3D models effortlessly.
{"title":"From Sketch to Reality: Enabling High-Quality, Cross-Category 3D Model Generation from Free-Hand Sketches with Minimal Data.","authors":"Ying Zang, Chunan Yu, Jiahao Zhang, Jing Li, Shengyuan Zhang, Lanyun Zhu, Chaotao Ding, Renjun Xu, Tianrun Chen","doi":"10.1109/TVCG.2026.3661544","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3661544","url":null,"abstract":"<p><p>This paper presents a novel approach for generating high-quality, cross-category 3D models from free-hand sketches with limited training data. We propose the first semi-supervised learning method to our knowledge for sketch-to-3D model conversion. Innovatively, we design a coarse-to-fine pipeline to perform the semi-supervised learning in the coarse stage and train a diffusion-based refiner to get a high-resolution 3D model. We designed a sketch-augmentation method for semi-supervised learning and integrated priors such as CLIP loss, shape prototypes, and adversarial loss to help generate high-quality results even with abstract and imprecise sketches. We also introduce an innovative procedural 3D generation method based on CAD code, which helps pre-train part of the network before fine-tuning with limited real data. Our approach, coupled with a specifically designed curriculum learning, allows us to generate high-quality 3D models across multiple categories with as few as 300 sketch-3D model pairs, marking a significant advancement over previous single-category approaches. In addition, we introduce the KO2D dataset, the largest collection of hand-drawn sketch-3D pairs to support further research in this area. As sketches are a far more intuitive and detailed way for users to express their unique ideas, we believe that this paper can move us closer to democratizing 3D content creation, enabling anyone to transform their ideas into 3D models effortlessly.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146133790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CoDA: Interactive Segmentation and Morphological Analysis of Dendroid Structures Exemplified on Stony Cold-Water Corals
Pub Date: 2026-02-03 | DOI: 10.1109/TVCG.2026.3656066
Kira Schmitt, Jurgen Titschack, Daniel Baum
Dendroid stony corals build highly complex colonies that develop by asexual reproduction from a single coral polyp sitting in a cup-like exoskeleton, called a corallite, resulting in a tree-like branching pattern of the exoskeleton. Despite their beauty and their ecological importance as reef builders in tropical shallow waters and in huge cold-water coral mounds in the deep ocean, systematic studies of the ontogenetic morphological development of such coral colonies are largely missing. The main reasons are the large number of corallites and the many secondary joints (coenosteal bridges) in the ideally tree-like structure, which make reconstructing the skeleton tree extremely tedious. Here, we present CoDA, the Coral Dendroid structure Analyzer, a visual analytics toolkit that, for the first time, makes it possible to systematically create skeleton trees representing the correct biological relationships of even very complex dendroid stony corals and to perform ontogenetic morphological analyses based on them. Starting from an initial instance segmentation of the calices/corallites, CoDA estimates the skeleton tree and provides convenient tools and visualizations for proofreading and correcting both the segmentation and the skeleton tree. Part of CoDA is CoDA.Graph, a feature-rich link-and-brush user interface that shows the extracted morphological features and graph layouts of the skeleton tree, enabling real-time exploration of complex coral colonies and their building blocks, the individual corallites and branches. We exemplify the use of CoDA on multiple specimens of the three most important reef-building cold-water coral species, which have largely varying morphotypes.
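One way to picture the skeleton-tree estimation is as a spanning-tree problem on the corallite adjacency graph: secondary coenosteal bridges create extra cycles that must be pruned. The sketch below is a hedged illustration under that assumption; the node names and the weighting scheme (weights encoding how unlikely a contact is to be a true parent-child connection) are ours, not CoDA's:

```python
import networkx as nx

# Toy adjacency graph of corallite instances. Low weight = likely true
# parent-child contact; high weight = likely secondary bridge.
G = nx.Graph()
G.add_weighted_edges_from([
    ("root", "a", 0.1), ("a", "b", 0.2), ("a", "c", 0.2),
    ("b", "d", 0.3), ("c", "d", 0.9),   # (c, d) acts as a secondary bridge
])
skeleton = nx.minimum_spanning_tree(G)  # drops the high-weight bridge edge
# Orient the tree away from the founder corallite to get parent-child relations.
tree = nx.bfs_tree(skeleton, "root")
print(sorted(tree.edges()))             # (c, d) has been removed
```

In practice such an automatic estimate is only a starting point, which is why the toolkit pairs it with proofreading and correction views.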
{"title":"CoDA: Interactive Segmentation and Morphological Analysis of Dendroid Structures Exemplified on Stony Cold-Water Corals.","authors":"Kira Schmitt, Jurgen Titschack, Daniel Baum","doi":"10.1109/TVCG.2026.3656066","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3656066","url":null,"abstract":"<p><p>Dendroid stony corals build highly complex colonies that develop by asexual reproduction from a single coral polyp sitting in a cup-like exoskeleton, called corallite, resulting in a tree-like branching pattern of its exoskeleton. Despite their beauty and ecological importance as reef builders in tropical shallow-water and in huge cold-water coral mounds in the deep ocean, systematic studies investigating the ontogenetic morphological development of such coral colonies are largely missing. The main reasons for this lack of study are the large number of corallites, and the existence of many secondary joints/coenosteal bridges in the ideally tree-like structure that make a reconstruction of the skeleton tree extremely tedious. Herein, we present CoDA, the Coral Dendroid structure Analyzer, a visual analytics toolkit that allows for the first time to systematically create skeleton trees representing the correct biological relationship of even very complex dendroid stony corals and to perform ontogenetic morphological analyses based on it. Starting with an initial instance segmentation of the calices/corallites, CoDA estimates the skeleton tree and provides convenient tools and visualizations for proofreading and correcting segmentation and skeleton tree. Part of CoDA is CoDA.Graph, a feature-rich link-and-brush user interface for showing the extracted morphological features and graph layouts of the skeleton tree, enabling real-time exploration of complex coral colonies and their building blocks, the individual corallites and branches. The use of CoDA is exemplified on multiple specimens of the three most important reef-building cold-water coral species with largely varying morphotypes.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DGM-RDW: Redirected Walking with Dynamic Geometric Mapping Between Environments
Pub Date: 2026-02-03 | DOI: 10.1109/TVCG.2026.3660749
Miao Wang, Qian Wang, Yi-Jun Li
Redirected walking (RDW) subtly adjusts the user's visual perspective on head-mounted displays during natural walking to reduce forced resets, thus allowing users to explore virtual environments larger than the available physical space. Alignment-based RDW controllers aim to minimize spatial discrepancies by optimizing the alignment between the user's physical and virtual environments. We introduce a novel alignment-based method that dynamically calculates mapping functions between physical and virtual geometries to enhance the algorithm's awareness of the RDW environments. To achieve this, we first construct an abstract model defining a mapping function between physical and virtual geometries and establish feasibility constraints in differential form. We then concretize this mapping, optimize it, and develop a practical implementation for dynamic geometric mapping in RDW. Our approach distinguishes itself by determining dense spatial mappings around the user, rather than aligning environments according to limited metrics. In extensive tests, our algorithm markedly decreases reset incidents during natural walking, surpassing existing RDW controllers. Dynamic geometric mapping thus offers a fresh perspective on alignment-based redirection.
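The "feasibility constraints in differential form" can be read as bounds on how the mapping locally distorts the user's motion. The following is a hedged mathematical sketch; the singular-value formulation and the gain interval are our assumptions, not the paper's exact constraint:

```latex
% Hedged sketch: a mapping f from the physical space P to the virtual space V,
% with a differential feasibility constraint on its Jacobian.
\[
  f : P \subset \mathbb{R}^2 \to V \subset \mathbb{R}^2, \qquad
  J_f(\mathbf{x}) = \frac{\partial f}{\partial \mathbf{x}} \in \mathbb{R}^{2 \times 2},
\]
\[
  g_{\min} \,\le\, \sigma_i\!\left(J_f(\mathbf{x})\right) \,\le\, g_{\max}
  \quad \text{for } i \in \{1, 2\} \text{ and all } \mathbf{x} \in P,
\]
where the singular values $\sigma_i$ bound the local stretch the redirection
applies to the user's motion, keeping gains within perceptual detection
thresholds $[g_{\min}, g_{\max}]$.
```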
{"title":"DGM-RDW: Redirected Walking with Dynamic Geometric Mapping Between Environments.","authors":"Miao Wang, Qian Wang, Yi-Jun Li","doi":"10.1109/TVCG.2026.3660749","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3660749","url":null,"abstract":"<p><p>Redirected walking (RDW) subtly adjusts the user's visual perspective on head-mounted displays during natural walking to reduce forced resets, thus enlarging the size of the virtual environment that can be explored beyond that of the physical environment. Alignment-based RDW controllers aim to minimize spatial discrepancies by optimizing the alignment between the user's physical and virtual environments. We introduce a novel alignment-based method that dynamically calculates mapping functions between physical and virtual geometries to enhance the algorithm's awareness of the RDW environments. To achieve this, we first construct an abstract model defining a mapping function between physical and virtual geometries and establish feasibility constraints in differential form. We then concretize this mapping, optimize it, and develop a practical implementation for dynamic geometric mapping in RDW. Our approach distinguishes itself by determining dense spatial mappings around the user, rather than aligning environments according to limited metrics. Through extensive testing, our algorithm has proven to markedly decrease reset incidents in natural walking, surpassing existing RDW controllers. The introduction of dynamic geometric mapping provides a fresh perspective, contributing significant insights and advancing the field.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stable-Hair v2: Real-World Hair Transfer via Multiple-View Diffusion Model
Pub Date: 2026-02-03 | DOI: 10.1109/TVCG.2026.3659861
Kuiyuan Sun, Yuxuan Zhang, Jichao Zhang, Jiaming Liu, Wei Wang, Nicu Sebe, Yao Zhao
While diffusion-based methods have shown impressive capabilities in capturing diverse and complex hairstyles, their ability to generate consistent and high-quality multi-view outputs, which is crucial for real-world applications such as digital humans and virtual avatars, remains underexplored. In this paper, we propose Stable-Hair v2, a novel diffusion-based multi-view hair transfer framework. To the best of our knowledge, this is the first work to leverage multiple-view diffusion models for robust, high-fidelity, and view-consistent hair transfer across multiple perspectives. We introduce a comprehensive multi-view data generation pipeline that produces high-quality triplets: bald images, reference hairstyles, and view-aligned source-bald pairs. Our multi-view hair transfer model integrates polar-azimuth embeddings for pose conditioning and temporal attention layers to ensure smooth transitions between views. To optimize this model, we design a multi-stage training strategy consisting of Pose-Controllable Latent IdentityNet training, Hair Extractor training, and Temporal Attention training. Extensive experiments demonstrate that our method accurately transfers detailed and realistic hairstyles to source subjects while achieving seamless and consistent results across views, significantly outperforming existing methods and establishing a new benchmark in multi-view hair transfer. Code is publicly available at https://github.com/sunkymepro/StableHairV2.
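A polar-azimuth embedding for pose conditioning could look like the sinusoidal encoding below. This is a minimal sketch under our own assumptions about the parameterization (frequency count, angle convention); the abstract does not specify the paper's exact scheme:

```python
import torch

def polar_azimuth_embedding(polar, azimuth, n_freqs=4):
    """Sinusoidal embedding of camera (polar, azimuth) angles, one row per view."""
    angles = torch.stack([polar, azimuth], dim=-1)            # (views, 2)
    freqs = 2.0 ** torch.arange(n_freqs)                      # (n_freqs,)
    scaled = angles[..., None] * freqs                        # (views, 2, n_freqs)
    emb = torch.cat([scaled.sin(), scaled.cos()], dim=-1)     # (views, 2, 2*n_freqs)
    return emb.flatten(start_dim=-2)                          # (views, 4*n_freqs)

views = torch.linspace(0, 2 * torch.pi, steps=8)              # azimuths circling the head
emb = polar_azimuth_embedding(torch.full_like(views, torch.pi / 2), views)
print(emb.shape)                                              # torch.Size([8, 16])
```

Such an embedding gives the diffusion model a smooth, continuous signal distinguishing nearby viewpoints, which complements the temporal attention layers that enforce consistency between them.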
{"title":"Stable-Hair v2: Real-World Hair Transfer via Multiple-View Diffusion Model.","authors":"Kuiyuan Sun, Yuxuan Zhang, Jichao Zhang, Jiaming Liu, Wei Wang, Nicu Sebe, Yao Zhao","doi":"10.1109/TVCG.2026.3659861","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3659861","url":null,"abstract":"<p><p>While diffusion-based methods have shown impressive capabilities in capturing diverse and complex hairstyles, their ability to generate consistent and high-quality multi-view out puts-crucial for real-world applications such as digital humans and virtual avatars-remains underexplored. In this paper, we propose Stable-Hair v2, a novel diffusion-based multi-view hair transfer framework. To the best of our knowledge, this is the first work to leverage multiple-view diffusion models for robust, high-fidelity, and view-consistent hair transfer across multiple perspectives. We introduce a comprehensive multi-view training data generation pipeline to generate high-quality triplet data, including bald images, reference hairstyles, and view-aligned source-bald pairs. Our multi-view hair transfer model integrates polar-azimuth embeddings for pose conditioning and temporal attention layers to ensure smooth transitions between views. To optimize this model, we design a novel multi-stage training strategy consisting of Pose-Controllable Latent IdentityNet training, Hair Extractor training, and Temporal Attention training. Extensive experiments demonstrate that our method accurately transfers detailed and realistic hairstyles to source subjects while achieving seamless and consistent results across views, significantly outperforming existing methods and establishing a new benchmark in multi-view hair transfer. Code is publicly available at https://github.com/sunkymepro/StableHairV2.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cybersickness Abatement from Repeated Exposure Generalizes across Experiences
Pub Date: 2026-02-02 | DOI: 10.1109/TVCG.2026.3659810
Jonathan W Kelly, Taylor A Doty, Michael C Dorneich, Stephen B Gilbert
Cybersickness, or sickness caused by virtual reality (VR), represents a significant threat to the usability of VR applications. Repeated exposure to the same VR stimulus causes a reduction in cybersickness, referred to as Cybersickness Abatement from Repeated Exposure (CARE). This study examined whether the benefits of CARE generalize across distinct VR contexts, which was operationalized as three distinct games (a climbing game, a puzzle game, and a stealth survival game). Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure condition played one VR game (either a puzzle game or a climbing game) on three separate days followed by a different VR game (a survival game) on the fourth day. Those in the Single Exposure condition played the survival game once. The three games all differed in several ways, including environment and task, whereas the puzzle and survival games shared a similar joystick locomotion interface that differed from the locomotion interface in the climbing game. Results indicate that cybersickness on Day 4 of the Repeated Exposure condition was significantly lower than that in the Single Exposure condition, regardless of which game was experienced on Days 1-3. The practical implication of this finding is that CARE that occurs in one VR context can generalize to a novel context with a distinct environment, task, and locomotion interface. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement and habituation. These results support systematic exposure as an approach to reducing cybersickness.
{"title":"Cybersickness Abatement from Repeated Exposure Generalizes across Experiences.","authors":"Jonathan W Kelly, Taylor A Doty, Michael C Dorneich, Stephen B Gilbert","doi":"10.1109/TVCG.2026.3659810","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3659810","url":null,"abstract":"<p><p>Cybersickness, or sickness caused by virtual reality (VR), represents a significant threat to the usability of VR applications. Repeated exposure to the same VR stimulus causes a reduction in cybersickness, referred to as Cybersickness Abatement from Repeated Exposure (CARE). This study examined whether the benefits of CARE generalize across distinct VR contexts, which was operationalized as three distinct games (a climbing game, a puzzle game, and a stealth survival game). Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure condition played one VR game (either a puzzle game or a climbing game) on three separate days followed by a different VR game (a survival game) on the fourth day. Those in the Single Exposure condition played the survival game once. The three games all differed in several ways, including environment and task, whereas the puzzle and survival games shared a similar joystick locomotion interface that differed from the locomotion interface in the climbing game. Results indicate that cybersickness on Day 4 of the Repeated Exposure condition was significantly lower than that in the Single Exposure condition, regardless of which game was experienced on Days 1-3. The practical implication of this finding is that CARE that occurs in one VR context can generalize to a novel context with a distinct environment, task, and locomotion interface. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement and habituation. These results support systematic exposure as an approach to reducing cybersickness.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DrainScope: Visual Analytics of Urban Drainage System
Pub Date: 2026-02-02 | DOI: 10.1109/TVCG.2026.3659985
Mingwei Lin, Zikun Deng, Qin Huang, Yiyi Ma, Lin-Ping Yuan, Jie Bao, Yu Zheng, Yi Cai
Urban drainage systems, often designed for outdated rainfall assumptions, are increasingly unable to cope with extreme rainfall events. This leads to flooding, infrastructure damage, and economic losses, necessitating effective diagnostic and improvement strategies. In current practice, conventional analysis platforms built on hydrological-hydraulic models provide only limited analytical support, making it difficult to pinpoint defects, inspect causal mechanisms, or evaluate alternative design options in an integrated manner. In this paper, we develop DrainScope, to our knowledge the first visual analytics approach for comprehensive diagnosis and iterative improvement of urban drainage systems. Defects are initially observed in the map view, after which DrainScope extracts the critical sub-systems associated with them using a rule-based search strategy, enabling focused analysis. It introduces a novel drainage-oriented Sankey diagram to visualize internal flow dynamics within the focused, static drainage system, revealing the causes of identified system defects. Furthermore, it enables flexible modification of drainage components corresponding to identified defects, coupled with a comparison view for rapid, iterative evaluation of improvement plans. We evaluate DrainScope through a real-world case study and positive feedback collected from domain experts.
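One simple rule for extracting a critical sub-system is to trace everything that drains into a defective node. The sketch below is an illustrative stand-in for the paper's rule-based search strategy, assuming the drainage network is modeled as a directed graph with edges pointing in the flow direction; the node names are hypothetical:

```python
import networkx as nx

# Toy drainage network: edges point downstream (inlet -> pipe -> junction -> outfall).
G = nx.DiGraph()
G.add_edges_from([
    ("inlet_A", "pipe_1"), ("inlet_B", "pipe_1"),
    ("pipe_1", "junction"), ("inlet_C", "junction"),
    ("junction", "outfall"),
])

def critical_subsystem(G, defect_node):
    """Extract the sub-network upstream of an observed defect (e.g. a flooded junction)."""
    upstream = nx.ancestors(G, defect_node) | {defect_node}
    return G.subgraph(upstream)

sub = critical_subsystem(G, "junction")
print(sorted(sub.nodes()))   # everything whose runoff reaches the flooded junction
```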
{"title":"DrainScope: Visual Analytics of Urban Drainage System.","authors":"Mingwei Lin, Zikun Deng, Qin Huang, Yiyi Ma, Lin-Ping Yuan, Jie Bao, Yu Zheng, Yi Cai","doi":"10.1109/TVCG.2026.3659985","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3659985","url":null,"abstract":"<p><p>Urban drainage systems, often designed for out dated rainfall assumptions, are increasingly unable to cope with extreme rainfall events. This leads to flooding, infrastructure damage, and economic losses, necessitating effective diagnostic and improvement strategies. In current practice, conventional analysis platforms built on hydrological-hydraulic models provide only limited analytical support, making it difficult to pin point defects, inspect causal mechanisms, or evaluate alternative design options in an integrated manner. In this paper, we develop DrainScope, to our knowledge, the first visual analytics approach for comprehensive diagnosis and iterative improvement of urban drainage systems. Defects are initially observed in the map view, after which DrainScope extracts the critical sub-systems associated with them using a rule-based search strategy, enabling focused analysis. It introduces a novel drainage-oriented Sankey diagram to visualize internal flow dynamics within the focused, static drainage system, revealing the causes of identified system defects. Furthermore, it enables flexible modification of drainage components corresponding to identified defects, coupled with a comparison view for rapid, iterative evaluation of improvement plans. We evaluate DrainScope through a real-world case study and positive feedback collected from domain experts.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Scheduling for Data-Parallel Path Tracing of Large-Scale Instanced Scenes
Pub Date: 2026-02-02 | DOI: 10.1109/TVCG.2026.3659931
Xiang Xu, Huiyu Li, Linwei Fan, Lu Wang
Data-parallel ray tracing is a crucial technique for rendering large-scale scenes that exceed the memory capacity of a single compute node. It partitions scene data across multiple nodes and accesses remote data through inter-node communication. However, the resulting communication overhead remains a significant bottleneck for practical performance. Existing approaches mitigate this bottleneck by enhancing data locality through dynamic scheduling during rendering, typically employing spatial partitioning to enable access prediction. Although effective in some scenarios, these methods incur significant redundancy in base geometry when applied to large-scale instanced scenes. In this paper, we introduce the first object-space-based dynamic scheduling algorithm, which uses object groups as the scheduling units to eliminate redundant storage of base data in instanced scenes. Additionally, we propose two data access frequency prediction methods to guide asynchronous data prefetching, enhancing rendering efficiency. Compared to the state-of-the-art method, our approach achieves an average rendering speedup of 77.6%, with a maximum improvement of up to 146.1%, while incurring only a 5% increase in scene memory consumption.
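The abstract names two access-frequency prediction methods without detailing them; one plausible instantiation, sketched below purely as an assumption, is exponentially decayed access counts per object group, with the top-ranked non-resident groups prefetched asynchronously:

```python
from collections import Counter

class PrefetchPredictor:
    """Frequency-based prefetch policy over object groups (illustrative sketch)."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.scores = Counter()

    def record_access(self, group_id):
        self.scores[group_id] += 1.0

    def end_frame(self):
        for g in self.scores:
            self.scores[g] *= self.decay      # older frames matter less

    def prefetch_candidates(self, resident, k=2):
        """Top-k most frequently accessed groups not yet resident on this node."""
        ranked = (g for g, _ in self.scores.most_common() if g not in resident)
        return [g for g, _ in zip(ranked, range(k))]

p = PrefetchPredictor()
for g in ["rocks", "trees", "rocks", "grass", "rocks"]:
    p.record_access(g)
p.end_frame()
print(p.prefetch_candidates(resident={"grass"}))   # ['rocks', 'trees']
```

Using object groups rather than spatial regions as the scheduling unit is what lets each base mesh live on one node while its many instances are traced everywhere.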
{"title":"Dynamic Scheduling for Data-Parallel Path Tracing of Large-Scale Instanced Scenes.","authors":"Xiang Xu, Huiyu Li, Linwei Fan, Lu Wang","doi":"10.1109/TVCG.2026.3659931","DOIUrl":"https://doi.org/10.1109/TVCG.2026.3659931","url":null,"abstract":"<p><p>Data-parallel ray tracing is a crucial technique for rendering large-scale scenes that exceed the memory capacity of a single compute node. It partitions scene data across multiple nodes and accesses remote data through inter-node communication. However, the resulting communication overhead remains a significant bottleneck for practical performance. Existing approaches mitigate this bottleneck by enhancing data locality through dynamic scheduling during rendering, typically employing spatial partitioning to enable access prediction. Although effective in some scenarios, these methods incur significant redundancy in base geometry when applied to large-scale instanced scenes. In this paper, we introduce the first object-space-based dynamic scheduling algorithm, which uses object groups as the scheduling units to eliminate redundant storage of base data in instanced scenes. Additionally, we propose two data access frequency prediction methods to guide asynchronous data prefetching, enhancing rendering efficiency. Compared to the state-of-the-art method, our approach achieves an average rendering speedup of 77.6%, with a maximum improvement of up to 146.1%, while incurring only a 5% increase in scene memory consumption.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DoodleAssist: Progressive Interactive Line Art Generation With Latent Distribution Alignment
Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3624800
Haoran Mo, Yulin Shen, Edgar Simo-Serra, Zeyu Wang
Creating high-quality line art in a fast and controlled manner plays a crucial role in anime production and concept design. We present DoodleAssist, an interactive and progressive line art generation system controlled by sketches and prompts, which helps both experts and novices concretize their design intentions or explore possibilities. Built upon a controllable diffusion model, our system performs progressive generation based on the last generated line art, synthesizing regions corresponding to drawn or modified strokes while keeping the remaining ones unchanged. To facilitate this process, we propose a latent distribution alignment mechanism to enhance the transition between the two regions and allow seamless blending, thereby alleviating issues of region incoherence and line discontinuity. Finally, we also build a user interface that allows the convenient creation of line art through interactive sketching and prompts. Qualitative and quantitative comparisons against existing approaches and an in-depth user study demonstrate the effectiveness and usability of our system. Our system can benefit various applications such as anime concept design, drawing assistance, and creativity support for children.
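One simple form of latent distribution alignment is to match the statistics of the newly synthesized region to those of the preserved region, AdaIN-style. The sketch below is a hedged illustration under that assumption; the paper's mechanism inside the diffusion model may differ:

```python
import torch

def align_latents(new_region, kept_region, eps=1e-5):
    """Match the mean/std of newly synthesized latents to the preserved region."""
    mu_new, std_new = new_region.mean(), new_region.std()
    mu_kept, std_kept = kept_region.mean(), kept_region.std()
    return (new_region - mu_new) / (std_new + eps) * std_kept + mu_kept

kept = torch.randn(4, 64, 64) * 0.5 + 1.0       # latents under existing strokes
new = torch.randn(4, 64, 64) * 2.0 - 3.0        # freshly generated region
aligned = align_latents(new, kept)
print(aligned.mean().item(), aligned.std().item())   # close to 1.0 and 0.5
```

Bringing the two regions onto the same latent statistics is what makes the boundary blend seamlessly instead of showing a visible seam or broken lines.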
{"title":"DoodleAssist: Progressive Interactive Line Art Generation With Latent Distribution Alignment.","authors":"Haoran Mo, Yulin Shen, Edgar Simo-Serra, Zeyu Wang","doi":"10.1109/TVCG.2025.3624800","DOIUrl":"10.1109/TVCG.2025.3624800","url":null,"abstract":"<p><p>Creating high-quality line art in a fast and controlled manner plays a crucial role in anime production and concept design. We present DoodleAssist, an interactive and progressive line art generation system controlled by sketches and prompts, which helps both experts and novices concretize their design intentions or explore possibilities. Built upon a controllable diffusion model, our system performs progressive generation based on the last generated line art, synthesizing regions corresponding to drawn or modified strokes while keeping the remaining ones unchanged. To facilitate this process, we propose a latent distribution alignment mechanism to enhance the transition between the two regions and allow seamless blending, thereby alleviating issues of region incoherence and line discontinuity. Finally, we also build a user interface that allows the convenient creation of line art through interactive sketching and prompts. Qualitative and quantitative comparisons against existing approaches and an in-depth user study demonstrate the effectiveness and usability of our system. Our system can benefit various applications such as anime concept design, drawing assistant, and creativity support for children.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"2087-2098"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145357315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AutoFDP: Automatic Force-Based Model Selection for Multicriteria Graph Drawing
Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3631659
Mingliang Xue, Yifan Wang, Zhi Wang, Lifeng Zhu, Lizhen Cui, Yueguo Chen, Zhiyu Ding, Oliver Deussen, Yunhai Wang
Traditional force-based graph layout models are rooted in virtual physics, while criteria-driven techniques position nodes by directly optimizing graph readability criteria. In this article, we systematically explore the integration of these two approaches, introducing criteria-driven force-based graph layout techniques. We propose a general framework that, based on user-specified readability criteria, such as minimizing edge crossings, automatically constructs a force-based model tailored to generate layouts for a given graph. Models derived from highly similar graphs can be reused to create initial layouts, and users can further refine layouts by imposing different criteria on subgraphs. We perform quantitative comparisons between our layout methods and existing techniques across various graphs and present a case study on graph exploration. Our results indicate that our framework generates superior layouts compared to existing techniques and exhibits better generalization capabilities than deep learning-based methods.
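The core idea of a criteria-driven force can be shown in miniature: treat a readability criterion as an energy and take its negative gradient as the force acting on each node. The sketch below uses uniform edge length as the criterion; the graph, step size, and criterion choice are our illustrative assumptions, not the paper's model:

```python
import numpy as np

# Criterion: uniform edge lengths. Energy E = sum over edges of (dist - target)^2;
# the force on each node is the negative gradient of E.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
pos = rng.standard_normal((4, 2))
target = 1.0                                      # desired edge length

for _ in range(200):                              # gradient descent on the energy
    force = np.zeros_like(pos)
    for u, v in edges:
        d = pos[u] - pos[v]
        dist = np.linalg.norm(d) + 1e-9
        grad = 2.0 * (dist - target) * d / dist   # dE/dpos[u] for this edge
        force[u] -= grad                          # force = negative gradient
        force[v] += grad
    pos += 0.05 * force

# Edge lengths approach the 1.0 target (two equilateral triangles sharing edge 0-2).
print([round(np.linalg.norm(pos[u] - pos[v]), 2) for u, v in edges])
```

Summing the gradients of several weighted criteria in the inner loop yields exactly the kind of composite force-based model the framework constructs automatically.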
{"title":"AutoFDP: Automatic Force-Based Model Selection for Multicriteria Graph Drawing.","authors":"Mingliang Xue, Yifan Wang, Zhi Wang, Lifeng Zhu, Lizhen Cui, Yueguo Chen, Zhiyu Ding, Oliver Deussen, Yunhai Wang","doi":"10.1109/TVCG.2025.3631659","DOIUrl":"10.1109/TVCG.2025.3631659","url":null,"abstract":"<p><p>Traditional force-based graph layout models are rooted in virtual physics, while criteria-driven techniques position nodes by directly optimizing graph readability criteria. In this article, we systematically explore the integration of these two approaches, introducing criteria-driven force-based graph layout techniques. We propose a general framework that, based on user-specified readability criteria, such as minimizing edge crossings, automatically constructs a force-based model tailored to generate layouts for a given graph. Models derived from highly similar graphs can be reused to create initial layouts, users can further refine layouts by imposing different criteria on subgraphs. We perform quantitative comparisons between our layout methods and existing techniques across various graphs and present a case study on graph exploration. Our results indicate that our framework generates superior layouts compared to existing techniques and exhibits better generalization capabilities than deep learning-based methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1554-1568"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145508602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}