Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3634636
Thomas Daniel, Malgorzata Olejniczak, Julien Tierny
This application paper investigates the stability of hydrogen bonds (H-bonds), as characterized by the Quantum Theory of Atoms in Molecules (QTAIM). First, we contribute a database of 4544 electron densities associated with four isomers of water hexamers (the so-called Ring, Book, Cage, and Prism), generated by distorting their equilibrium geometry under various structural perturbations, modeling the natural dynamic behavior of molecular systems. Second, we present a new stability measure, called the bond occurrence rate, which associates each bond path present at equilibrium with its rate of occurrence within the input ensemble. We also provide an algorithm, called BondMatcher, for its automatic computation, based on a tailored, geometry-aware partial isomorphism estimation between the extremum graphs of the considered electron densities. Our new stability measure allows for the automatic identification of densities lacking H-bond paths, enabling further visual inspection. Specifically, the topological analysis enabled by our framework corroborates experimental observations and provides refined geometrical criteria for characterizing the disappearance of H-bond paths.
Title: BondMatcher: H-Bond Stability Analysis in Molecular Systems.
Journal: IEEE Transactions on Visualization and Computer Graphics
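Once a matching criterion is fixed, the bond occurrence rate itself reduces to simple bookkeeping over the ensemble. The sketch below is a hypothetical, heavily simplified stand-in: it represents each bond path by its two atom endpoints and matches by endpoint proximity under a distance tolerance `tol`, whereas the paper's BondMatcher performs geometry-aware partial isomorphism estimation on full extremum graphs.

```python
from math import dist

def bond_occurrence_rates(equilibrium_bonds, ensemble, tol=0.3):
    """For each equilibrium bond path (a pair of 3D endpoints), count in how
    many ensemble members a geometrically matching bond path exists (both
    endpoints within tol, in either order), and return the occurrence rate."""
    rates = []
    for a, b in equilibrium_bonds:
        hits = 0
        for member_bonds in ensemble:
            if any(
                (dist(a, p) < tol and dist(b, q) < tol)
                or (dist(a, q) < tol and dist(b, p) < tol)
                for p, q in member_bonds
            ):
                hits += 1
        rates.append(hits / len(ensemble))
    return rates

# Toy example: one bond path at equilibrium, present in 2 of 3 distorted densities.
eq = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))]
ens = [
    [((0.05, 0.0, 0.0), (1.02, 0.0, 0.0))],   # match
    [((0.0, 0.0, 0.0), (2.5, 0.0, 0.0))],     # endpoint drifted away: no match
    [((0.98, 0.0, 0.0), (0.01, 0.0, 0.0))],   # match (endpoints reversed)
]
print(bond_occurrence_rates(eq, ens))  # [0.6666666666666666]
```

A rate below 1.0 flags densities in which the bond path vanished, which is exactly the set the paper forwards to visual inspection.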
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3634787
Chongke Bi, Xin Gao, Jiakang Deng, Guan Li, Jun Han
Large-scale scientific simulations require significant resources to generate high-resolution (HR) time-varying data (TVD). While super-resolution is an efficient post-processing strategy to reduce costs, existing methods rely on large amounts of HR training data, limiting their applicability to diverse simulation scenarios. To address this constraint, we propose CD-TVD, a novel framework that combines contrastive learning with an improved diffusion-based super-resolution model to achieve accurate 3D super-resolution from limited time-step high-resolution data. During pre-training on historical simulation data, the contrastive encoder and diffusion super-resolution modules learn degradation patterns and detailed features of high-resolution and low-resolution samples. In the training phase, the improved diffusion model with a local attention mechanism is fine-tuned using only one newly generated high-resolution time step, leveraging the degradation knowledge learned by the encoder. This design minimizes the reliance on large-scale high-resolution datasets while maintaining the capability to recover fine-grained details. Experimental results on fluid and atmospheric simulation datasets confirm that CD-TVD delivers accurate and resource-efficient 3D super-resolution, marking a significant advancement in data augmentation for large-scale scientific simulations.
Title: CD-TVD: Contrastive Diffusion for 3D Super-Resolution with Scarce High-Resolution Time-Varying Data.
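As one concrete reading of the contrastive pre-training step, the sketch below implements a standard InfoNCE-style objective over paired HR/LR patch embeddings in NumPy: matching pairs are pulled together and mismatched pairs pushed apart. The pairing scheme, the temperature `tau`, and the random embeddings are illustrative assumptions; the paper's actual encoder architecture and loss are not reproduced here.

```python
import numpy as np

def info_nce(z_hr, z_lr, tau=0.1):
    """z_hr, z_lr: (n, d) L2-normalized embeddings, row i of each is a pair.
    Returns the mean InfoNCE loss with matched pairs on the diagonal."""
    logits = z_hr @ z_lr.T / tau                    # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
# Correctly aligned HR/LR pairs yield a lower loss than shuffled pairs.
aligned = info_nce(z, z)
shuffled = info_nce(z, z[::-1])
print(aligned < shuffled)
```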
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3642878
Aashish Panta, Alper Sahistan, Xuan Huang, Amy A Gooch, Giorgio Scorzelli, Hector Torres, Patrice Klein, Gustavo A Ovando-Montejo, Peter Lindstrom, Valerio Pascucci
The massive data generated by scientists daily serve both as a major catalyst for new discoveries and innovations and as a significant roadblock that restricts access to the data. Our paper introduces a new approach to removing big-data barriers and democratizing access to petascale data for the broader scientific community. Our novel data fabric abstraction layer allows user-friendly querying of scientific information while hiding the complexities of dealing with file systems or cloud services. We enable FAIR (Findable, Accessible, Interoperable, and Reusable) access to datasets such as NASA's petascale climate datasets. Our paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our novel data fabric abstraction utilizes state-of-the-art progressive compression algorithms and machine-learning insights to power scalable visualization dashboards for petascale data. The result provides users with the ability to identify extreme events or trends dynamically, expanding access to scientific data and further enabling discoveries. We validate our approach by improving the ability of climate scientists to visually explore their data via three fully interactive dashboards. We further validate our approach by deploying the dashboards and simplified training materials in the classroom at a minority-serving institution. These dashboards, released in simplified form to the general public, contribute significantly to a broader push to democratize the access and use of climate data.
Title: Expanding Access to Science Participation: A FAIR Framework for Petascale Data Visualization and Analytics.
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3634876
Y Li, S Shao, P Baudains, A I Meso, N Holliman, A Abdul-Rahman, R Borgo
Recent work suggests that shape can encode quantitative data via a mapping between value and spatial frequency (SF). However, the set-size effect when perceiving multiple SF-based items remains unclear. While automatic feature extraction has been found to be less affected by set size (the number of items in a group), higher-level processes for making perceptual decisions tend to require increased cognitive demand. To investigate the set-size effect on comparing integrated SF-based items, we used a risk-based scenario to assess discrimination performance. Participants were asked to discriminate between pairs of maps containing multiple SF glyphs, in which each glyph represents one of four discrete levels (none, low, medium, high), forming an aggregate "risk strength" per map. The set size was also adjusted across conditions, ranging from small (3 items) to large (7 items). Discrimination sensitivity is modeled with a logistic function and response time with a mixed-effects linear model. Results show that smaller set sizes and lower overall strength enable more precise discrimination, with faster response times for larger differences between maps. Incorporating set size and overall strength into the logistic model, we found that these variables both independently and jointly influence discrimination sensitivity. We suggest these results point toward capacity-limited processes rather than purely automatic ensemble coding. Our findings highlight the importance of set size and overall signal strength when presenting multiple SF glyphs in data visualization.
Title: Set Size Matters: Capacity-Limited Perception of Grouped Spatial-Frequency Glyphs.
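A logistic model of discrimination with the strength difference and set size as predictors can be sketched as follows. The synthetic trial data, the coefficient values, and the exact model specification are assumptions for illustration only, not the paper's fitted model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Plain gradient descent on the average logistic log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Synthetic trials: accuracy rises with the strength difference between the
# two maps and falls with set size, mirroring a capacity-limited pattern.
rng = np.random.default_rng(1)
n = 4000
diff = rng.uniform(0, 3, n)                # difference in aggregate strength
set_size = rng.integers(3, 8, n)           # 3..7 glyphs per map
X = np.column_stack([np.ones(n), diff, set_size - 5.0])   # centered set size
y = rng.random(n) < sigmoid(-1.0 + 1.5 * diff - 0.4 * (set_size - 5.0))
w = fit_logistic(X, y.astype(float))
print(w[1] > 0, w[2] < 0)   # larger differences help, larger sets hurt
```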
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3642740
Cheng Shang, Xingyu Chen, Liang An, Jiajun Zhang, Yuxiang Zhang, Yebin Liu, Xubo Yang
Recent research on motion generation and text-to-motion synthesis focuses on coarse-grained motion descriptions, neglecting fine-grained motion details and motion quality refinement. Additionally, current text-to-motion models, such as MotionGPT, lack multi-turn interaction capabilities, relying on single-turn and single-modality transformations, which limits their ability to integrate information from different modalities across interaction stages. These gaps leave critical questions, such as "How well is the motion performed?" and "How can it be refined?", largely unaddressed. To address these issues, first, we introduce two fine-grained dance datasets, one focusing on jazz dance and the other on folk dance, which we have independently collected. Second, considering that dance motions are inherently complex and consist of long sequential actions, we introduce both global and local optimization during the motion encoding phase and employ Hidden Markov Model (HMM) temporal modeling to capture differential features between correct and incorrect movements, thereby optimizing the training process. Finally, we propose a multi-turn historical dialogue framework that enables three-stage generation (motion assessment, textual instruction, and motion refinement) for input videos. This framework assists dance beginners by providing feedback on their movements, offering textual instructions, and delivering motion-based refinement. Experimental results on the jazz dance and folk dance datasets demonstrate that our method surpasses existing approaches in both quantitative and qualitative metrics, establishing a new benchmark for motion-text generation in the field of dance training.
Title: DanceAgent: Dance Movement Refinement with LLM Agent.
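The HMM idea, scoring a movement sequence under models of correct versus incorrect execution, can be illustrated with a toy discrete HMM and the scaled forward algorithm. All model parameters and the coarse pose vocabulary below are hypothetical; the paper's models operate on learned motion features, not hand-set tables.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial state probs, A: transition matrix, B[state, symbol]:
    emission probs), computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hypothetical 2-state HMMs over a pose vocabulary {0, 1, 2}: the
# "correct execution" model expects a 0 -> 1 -> 2 progression, while the
# "incorrect" model emits symbols uniformly.
pi = np.array([1.0, 0.0])
A = np.array([[0.8, 0.2], [0.0, 1.0]])
B_correct = np.array([[0.8, 0.15, 0.05], [0.05, 0.15, 0.8]])
B_incorrect = np.full((2, 3), 1 / 3)

seq = [0, 0, 1, 2, 2, 2]   # a well-executed phrase
better_fit_correct = (forward_loglik(seq, pi, A, B_correct)
                      > forward_loglik(seq, pi, A, B_incorrect))
print(better_fit_correct)
```

Comparing the two log-likelihoods is one simple way to flag which portions of a performance deviate from the reference execution.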
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3634777
Douglas Markant, Subham Sah, Alireza Karduni, Milad Rogha, My Thai, Wenwen Dou
Political sectarianism is fueled in part by misperceptions of political opponents: people commonly overestimate support for extreme policies among members of the other party. These misperceptions inflame partisan animosity and may be used to justify extremism within one's own party. Research suggests that correcting partisan misperceptions, by informing people about the actual views of outparty members, may reduce one's own expressed support for political extremism, including partisan violence and antidemocratic actions. However, there remains a limited understanding of how the design of correction interventions drives these effects. The present study investigated how correction effects depend on different representations of outparty views communicated through data visualizations. Building on prior interventions that present the average outparty view, we consider the impact of visualizations that more fully convey the range of views among outparty members. We conducted an experiment with U.S.-based participants from Prolific (N=239 Democrats, N=244 Republicans). Participants made predictions about support for political violence and undemocratic practices among members of their political outparty. They were then presented with data from an earlier survey on the actual views of outparty members. Some participants viewed only the average response (Mean-Only condition), while other groups were shown visual representations of the range of views from 75% of the outparty (Mean+Interval condition) or the full distribution of responses (Mean+Points condition). Compared to a control group that was not informed about outparty views, we observed the strongest correction effects (i.e., lower support for political violence and undemocratic practices) among participants in the Mean-Only and Mean+Points conditions, while correction effects were weaker in the Mean+Interval condition.
In addition, participants who observed the full distribution of outparty views (Mean+Points condition) were the most accurate at later recalling the degree of support among the outparty. Our findings suggest that data visualizations can be an important tool for correcting pervasive distortions in beliefs about other groups. However, the way in which variability in outparty views is visualized can significantly shape how people interpret and respond to corrective information.
Title: Correcting Misperceptions at a Glance: Using Data Visualizations to Reduce Political Sectarianism.
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3642044
Peiyu Zhang, Mohamad El Iskandarani, Sara Riggs
Augmented reality (AR) may provide supplementary information to support tasks in the physical world, offering the advantage of displaying multiple windows with high flexibility in interface layout. In AR-physical world mixed scenarios, users often need to locate and retrieve target information from virtual multi-window displays. Understanding how to design effective layouts for these interfaces is critical to enhancing visual search performance, a key element of information retrieval. This study examines the effects of depth separation, information density, and curvature of virtual multi-window displays on a conjunctive visual search task in AR. Results indicate that reducing information density and introducing curvature significantly reduced both search time and the time taken to decide that the target was absent (task quit time). Although depth separation did not significantly affect search time, it notably reduced quit time. The number of errors was not significantly influenced by any of the factors. Additionally, users preferred a curved display with lower information density that remained within the device's field of view, and their search time was fastest with this layout. Finally, we noticed variations in layout preference and performance changes among individuals, possibly influenced by differences in search strategies.
Title: The Effect of Layout on Visual Search in Augmented Reality Multi-Window Displays.
Pub Date: 2025-12-11 | DOI: 10.1109/TVCG.2025.3634794
Simon Warchol, Grace Guo, Johannes Knittel, Dan Freeman, Usha Bhalla, Jeremy L Muhlich, Peter K Sorger, Hanspeter Pfister
Dimensionality reduction techniques help analysts make sense of complex, high-dimensional spatial datasets, such as multiplexed tissue imaging, satellite imagery, and astronomical observations, by projecting data attributes into a two-dimensional space. However, these techniques typically abstract away crucial spatial, positional, and morphological contexts, complicating interpretation and limiting insights. To address these limitations, we present SEAL, an interactive visual analytics system designed to bridge the gap between abstract 2D embeddings and their rich spatial imaging context. SEAL introduces a novel hybrid-embedding visualization that preserves image and morphological information while integrating critical high-dimensional feature data. By adapting set visualization methods, SEAL allows analysts to identify, visualize, and compare selections, defined manually or algorithmically, in both the embedding and original spatial views, facilitating a deeper understanding of the spatial arrangement and morphological characteristics of entities of interest. To elucidate differences between selected sets of items, SEAL employs a scalable surrogate model to calculate feature importance scores, identifying the most influential features governing the position of objects within embeddings. These importance scores are visually summarized across selections, with mathematical set operations enabling detailed comparative analyses. We demonstrate SEAL's effectiveness and versatility through three case studies: colorectal cancer tissue analysis with a pharmacologist, melanoma investigation with a cell biologist, and exploration of sky survey data with an astronomer. These studies underscore the importance of integrating image context into embedding spaces when interpreting complex imaging datasets.
Implemented as a standalone tool while also integrating seamlessly with computational notebooks, SEAL provides an interactive platform for spatially informed exploration of high-dimensional datasets, significantly enhancing interpretability and insight generation.
Title: Spatially-resolved Embedding Analysis with Linked Imaging Data.
The emergence of quantum computers heralds a new frontier in computational power, empowering quantum algorithms to address challenges that defy classical computation. However, the design of quantum algorithms is challenging as it largely requires quantum experts to manually translate mathematical expressions into quantum circuit diagrams. To ease this process, particularly for prototyping, educational, and modular design workflows, we propose to bridge the textual and visual contexts between mathematics and quantum circuits through visual linking and transitions. We contribute a design space for quantum algorithm design, focusing on the textual and visual elements, interactions, and design patterns throughout the quantum algorithm design process. Informed by the design space, we introduce QuRAFT, a visual interface that facilitates a seamless transition from abstract mathematical expressions to concrete quantum circuits. QuRAFT incorporates a suite of eight integrated visual and interaction designs tailored to support users in the formulation, implementation, and validation process of quantum algorithm design. Through two detailed case studies and a user evaluation, this paper demonstrates the effectiveness of QuRAFT. Feedback from quantum computing experts highlights the practical utility of QuRAFT in algorithm design and provides valuable implications for future advancements in visualization and interaction design within the quantum computing domain.
{"title":"QuRAFT: Enhancing Quantum Algorithm Design by Visual Linking between Mathematical Concepts and Quantum Circuits.","authors":"Zhen Wen, Jieyi Chen, Yao Lu, Siwei Tan, Jianwei Yin, Minfeng Zhu, Wei Chen","doi":"10.1109/TVCG.2025.3642559","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3642559","url":null,"abstract":"<p><p>The emergence of quantum computers heralds a new frontier in computational power, empowering quantum algorithms to address challenges that defy classical computation. However, the design of quantum algorithms is challenging as it largely requires quantum experts to manually translate mathematical expressions into quantum circuit diagrams. To ease this process, particularly for prototyping, educational, and modular design workflows, we propose to bridge the textual and visual contexts between mathematics and quantum circuits through visual linking and transitions. We contribute a design space for quantum algorithm design, focusing on the textual and visual elements, interactions, and design patterns throughout the quantum algorithm design process. Informed by the design space, we introduce QuRAFT, a visual interface that facilitates a seamless transition from abstract mathematical expressions to concrete quantum circuits. QuRAFT incorporates a suite of eight integrated visual and interaction designs tailored to support users in the formulation, implementation, and validation process of quantum algorithm design. Through two detailed case studies and a user evaluation, this paper demonstrates the effectiveness of QuRAFT. 
Feedback from quantum computing experts highlights the practical utility of QuRAFT in algorithm design and provides valuable implications for future advancements in visualization and interaction design within the quantum computing domain.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145727926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-10DOI: 10.1109/TVCG.2025.3634820
Shaolun Ruan, Rui Sheng, Xiaolin Wen, Jiachen Wang, Tianyi Zhang, Yong Wang, Tim Dwyer, Jiannan Li
Design studies aim to develop visualization solutions for real-world problems across various application domains. Recently, the emergence of large language models (LLMs) has introduced new opportunities to enhance the design study process, providing capabilities such as creative problem-solving, data handling, and insightful analysis. However, despite their growing popularity, there remains a lack of systematic understanding of how LLMs can effectively assist researchers in visualization-specific design studies. To fill this gap, we conducted a multi-stage qualitative study involving 30 design study researchers from diverse backgrounds and expertise levels. Through in-depth interviews and carefully designed questionnaires, we investigated strategies for utilizing LLMs, the challenges encountered, and the practices used to overcome them. We further compiled the roles that LLMs can play across different stages of the design study process. Our findings highlight practical implications for visualization practitioners and provide a framework for leveraging LLMs to facilitate the design study process in visualization research.
{"title":"Qualitative Study for LLM-assisted Design Study Process: Strategies, Challenges, and Roles.","authors":"Shaolun Ruan, Rui Sheng, Xiaolin Wen, Jiachen Wang, Tianyi Zhang, Yong Wang, Tim Dwyer, Jiannan Li","doi":"10.1109/TVCG.2025.3634820","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3634820","url":null,"abstract":"<p><p>Design studies aim to develop visualization solutions for real-world problems across various application domains. Recently, the emergence of large language models (LLMs) has introduced new opportunities to enhance the design study process, providing capabilities such as creative problem-solving, data handling, and insightful analysis. However, despite their growing popularity, there remains a lack of systematic understanding of how LLMs can effectively assist researchers in visualization-specific design studies. To fill this gap, we conducted a multi-stage qualitative study involving 30 design study researchers from diverse backgrounds and expertise levels. Through in-depth interviews and carefully designed questionnaires, we investigated strategies for utilizing LLMs, the challenges encountered, and the practices used to overcome them. We further compiled the roles that LLMs can play across different stages of the design study process. 
Our findings highlight practical implications for visualization practitioners and provide a framework for leveraging LLMs to facilitate the design study process in visualization research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145727796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}