
IEEE Transactions on Visualization and Computer Graphics: Latest Publications

GNN101: Visual Learning of Graph Neural Networks in Your Web Browser.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3634087
Yilin Lu, Chongwei Chen, Yuxin Chen, Kexin Huang, Marinka Zitnik, Qianwen Wang

Graph Neural Networks (GNNs) have achieved significant success across various applications. However, their complex structures and inner workings can be challenging for non-AI experts to understand. To address this issue, this study presents GNN101, an educational visualization tool for interactive learning of GNNs. GNN101 introduces a set of animated visualizations that seamlessly integrate mathematical formulas with visual representations across multiple levels of abstraction, including a model overview, layer operations, and detailed calculations. Users can easily switch between two complementary views: a node-link view that offers an intuitive understanding of the graph data, and a matrix view that provides a space-efficient and comprehensive overview of all features and their transformations across layers. GNN101 was designed and developed in close collaboration with four GNN experts and deployed in three GNN-related courses. We demonstrated the usability and effectiveness of GNN101 via use cases and user studies with both GNN teaching assistants and students. To ensure broad educational access, GNN101 is built with modern web technologies and runs directly in web browsers without requiring any installation.
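For readers who want to connect the tool's "detailed calculations" view to concrete numbers, the sketch below works through one standard GNN layer (a GCN-style convolution) in plain numpy. It is background illustration only: GNN101's own implementation is not shown here, and the layer variants it visualizes may differ.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer (Kipf & Welling style):
    H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # degree of each node
    D_inv_sqrt = np.diag(deg ** -0.5)         # normalization matrix
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

# toy graph: 3 nodes in a path, 2 input features, 2 output features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.random.default_rng(0).normal(size=(2, 2))
print(gcn_layer(A, H, W))  # new node features after one layer
```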

Citations: 0
Image Based Whole Sky Cloud Volume Generation.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3641982
Pinar Satilmis, Kurt Debattista, Thomas Bashford-Rogers

Accurate illumination is crucial for many imaging and vision applications, and skies are the dominant source of lighting in many scenes. Most existing work on representing sky illumination has focused on clear skies or, more recently, on generative approaches for synthesizing clouds. However, these are limited in that they assume distant illumination and do not capture the 3D properties of clouds. This paper presents a novel and principled approach to extracting 3D whole-sky volumetric representations of clouds that can be used for imaging applications. Our approach extracts clouds from a single fisheye capture of the sky via an iterative optimization process. We achieve this by exploiting the physical properties of light scattering in clouds and using them to drive a domain-specific light transport simulation algorithm that renders the images required for optimization. Re-rendering with our reconstructed clouds closely matches the real captures, and the method also enables novel uses of environment maps, such as including captured clouds in renderings, casting cloud shadows, and producing more accurate aerial perspective and lighting.
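The "iterative optimization process" is an analysis-by-synthesis loop: render a candidate cloud volume, compare against the capture, update, and repeat. The toy sketch below illustrates only that loop shape; it substitutes a hypothetical linear operator P for the paper's scattering-aware light-transport renderer, so none of the physics carries over.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the analysis-by-synthesis loop: recover a cloud
# "volume" x from a captured image y by repeatedly rendering, comparing,
# and updating. The renderer here is a fixed linear operator P (nothing
# like the paper's scattering-aware light transport).
n_vox, n_pix = 64, 32
P = rng.random((n_pix, n_vox)) / n_vox        # hypothetical linear renderer
x_true = rng.random(n_vox)                    # unknown cloud densities
y = P @ x_true                                # the "fisheye capture"

x = np.zeros(n_vox)                           # initial guess: empty sky
lr = 50.0                                     # step size tuned to this toy
for _ in range(2000):
    residual = P @ x - y                      # rendered image vs capture
    x -= lr * 2.0 / n_pix * (P.T @ residual)  # gradient step on mean L2 loss
    x = np.clip(x, 0.0, None)                 # densities stay non-negative

print("image residual:", np.linalg.norm(P @ x - y))
```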

Citations: 0
Learning-Based Recommendations for Efficient Urban Visual Query.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3625071
Ziliang Wu, Wei Chen, Xiangyang Wu, Zihan Zhou, Yingchaojie Feng, Junhua Lu, Zhiguang Zhou, Mingliang Xu

Urban visual querying leverages visual representations and interactions to depict the domain of interest and express related requests for exploring complex datasets, which is usually an iterative process. One main challenge of this process is the vast search space involved in identifying querying conditions, observing querying results, and formulating subsequent queries. This paper proposes a novel acceleration scheme that intelligently recommends a small set of querying results conditioned on previous queries. Central to our scheme is a reinforcement-learning-based approach that trains a recommendation agent by simulating user behavior and characterizing the search space. We additionally propose a mixed-initiative urban visual query scheme to further enhance the exploration process. We evaluate our approach through qualitative and quantitative experiments on a real-world scenario. The experimental results demonstrate its capability to reduce user workload, achieve optimized querying, and improve analysis efficiency.
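As a toy illustration of the "simulating user behavior" idea, the bandit-style sketch below trains an agent to prefer the candidate results that a hypothetical simulated user tends to accept. The paper's agent and state space are far richer; every number here is invented.

```python
import random

# Minimal sketch of training a recommendation agent on a simulated user,
# in the spirit of (but far simpler than) the paper's RL setup. Actions
# are candidate query results; the simulated user accepts some more often.
random.seed(0)
N_RESULTS = 5
accept_prob = [0.1, 0.7, 0.3, 0.05, 0.5]    # hypothetical user preferences

q = [0.0] * N_RESULTS                       # value estimate per candidate
counts = [0] * N_RESULTS
eps = 0.1                                   # exploration rate

for step in range(5000):
    if random.random() < eps:               # explore a random candidate
        a = random.randrange(N_RESULTS)
    else:                                   # exploit the best estimate
        a = max(range(N_RESULTS), key=lambda i: q[i])
    reward = 1.0 if random.random() < accept_prob[a] else 0.0
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]     # incremental mean update

print("learned values:", [round(v, 2) for v in q])  # tracks accept_prob
```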

Citations: 0
OM4AnI: A Novel Overlap Measure for Anomaly Identification in Multi-Class Scatterplots.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3642219
Liqun Liu, Leonid Bogachev, Mahdi Rezaei, Nishant Ravikumar, Arjun Khara, Mohsen Azarmi, Roy A Ruddle

Scatterplots are widely used across various domains to identify anomalies in datasets, particularly in multi-class settings, such as detecting misclassified or mislabeled data. However, scatterplot effectiveness often declines with large datasets due to limited display resolution. This paper introduces a novel Visual Quality Measure (VQM), OM4AnI (Overlap Measure for Anomaly Identification), which quantifies the degree of overlap relevant to identifying anomalies, helping users estimate how effectively anomalies can be observed in multi-class scatterplots. OM4AnI begins by computing an anomaly index based on each data point's position relative to its class cluster. The scatterplot is then discretized into a matrix representation by binning the display space into cell-level (pixel-level) grids and computing the coverage of each pixel. The coverage takes into account the anomaly index of the data points covering each pixel as well as visual features (marker shape, marker size, and rendering order). Building on this foundation, we sum the coverage information over all cells (pixels) of the matrix representation to obtain the final quality score with respect to anomaly identification. We conducted an evaluation of the efficiency, effectiveness, and sensitivity of OM4AnI in comparison with six representative baseline methods operating at different computation granularity levels: data level, marker level, and pixel level. The results show that OM4AnI outperforms the baseline methods, exhibiting more monotonic trends against the ground truth and greater sensitivity to rendering order. This confirms that OM4AnI can inform users about how effectively their scatterplots support anomaly identification. Overall, OM4AnI shows strong potential as an evaluation metric and as a driver for optimizing scatterplots through automatic adjustment of visual parameters.
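The sketch below mimics the described pipeline at toy scale: an anomaly index per point (here simply distance to the class centroid, one plausible stand-in for the paper's definition), a coarse grid as the "pixel" discretization, painter's-order occlusion as the coverage rule, and a final score summarizing how many anomalies remain visible. The real OM4AnI coverage computation is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scatterplot: 500 points, 3 classes.
pts = rng.normal(size=(500, 2))
labels = rng.integers(0, 3, size=500)
centroids = np.array([pts[labels == c].mean(axis=0) for c in range(3)])
anomaly = np.linalg.norm(pts - centroids[labels], axis=1)  # anomaly index

GRID = 16                                   # coarse stand-in for pixels
lo, hi = pts.min(axis=0), pts.max(axis=0)
cells = ((pts - lo) / (hi - lo + 1e-9) * GRID).astype(int).clip(0, GRID - 1)

top = {}                                    # cell -> last point drawn there
for i, (cx, cy) in enumerate(cells):        # i doubles as rendering order
    top[(cx, cy)] = i

visible = np.zeros(len(pts), dtype=bool)    # painter's-order occlusion
visible[list(top.values())] = True
k = 25                                      # focus on the k most anomalous
worst = np.argsort(anomaly)[-k:]
score = visible[worst].mean()               # share of anomalies left visible
print(f"{score:.0%} of the top-{k} anomalies survive overplotting")
```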

Citations: 0
Toward More Explainable Nonlinear Dimensionality Reduction: A Feature-Driven Interaction Approach.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3622114
Aeri Cho, Hyeon Jeon, Kiroong Choe, Seokhyeon Park, Jinwook Seo

Nonlinear dimensionality reduction (NDR) techniques are widely used to visualize high-dimensional data. However, they often lack explainability, making it challenging for analysts to relate patterns in projections to original high-dimensional features. Existing interactive methods typically separate user interactions from the feature space, treating them primarily as post-hoc explanations rather than integrating them into the exploration process. This separation limits insight generation by restricting users' understanding of how features dynamically influence projections. To address this limitation, we propose a bidirectional interaction method that directly bridges the feature space and the projections. By allowing users to adjust feature weights, our approach enables intuitive exploration of how different features shape the embedding. We also define visual semantics to quantify projection changes, enabling structured pattern discovery through automated query-based interaction. To ensure responsiveness despite the computational complexity of NDR, we employ a neural network to approximate the projection process, enhancing scalability while maintaining accuracy. We evaluated our approach through quantitative analysis, assessing accuracy and scalability. A user study with a comprehensive visual interface and case studies demonstrated its effectiveness in supporting hypothesis generation and exploratory tasks with real-world data. The results confirmed that our approach supports diverse analytical scenarios and enhances users' ability to explore and interpret high-dimensional data through interactive exploration grounded in the feature space.
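A minimal sketch of the feature-weight interaction: scale each feature column by a user-chosen weight and re-project. PCA stands in here for the nonlinear DR plus neural surrogate described above, since the interaction contract (weights in, updated embedding out) is the same; the displacement measure is crude because it ignores rotation and sign flips of the embedding.

```python
import numpy as np

rng = np.random.default_rng(3)

def project(X, weights):
    """Embed weighted features into 2D. PCA is a stand-in for the
    nonlinear DR + neural surrogate described above: the user scales
    feature columns and the embedding reacts."""
    Xw = (X - X.mean(axis=0)) * weights       # apply per-feature weights
    U, S, Vt = np.linalg.svd(Xw, full_matrices=False)
    return Xw @ Vt[:2].T                      # top-2 principal directions

X = rng.normal(size=(200, 5))                 # toy high-dimensional data
base = project(X, np.ones(5))
boosted = project(X, np.array([1, 1, 1, 1, 5.0]))  # emphasize feature 4

# Crude responsiveness measure (ignores embedding rotation/flips).
shift = np.linalg.norm(boosted - base, axis=1).mean()
print(f"mean point displacement after reweighting: {shift:.2f}")
```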

Citations: 0
Toward More Intuitive VR Locomotion Techniques: How Locomotion Metaphors Shape Users' Mental Models.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3630826
Lisa Marie Prinz, Tintu Mathew, Benjamin Weyers

Interface metaphors are thought to enable intuitive and effective interaction with a user interface by allowing users to draw on existing knowledge, reducing the need for instructions. This makes metaphors promising candidates for driving the development of locomotion interfaces for virtual reality (VR). Since creating metaphoric interfaces can be difficult, it is important to analyze how typical locomotion metaphors can support intuitive interaction. We performed a qualitative online study to observe the effect of typical metaphors (Walking, Steering, Flying, and Teleportation) and the influence of perceived affordances and user background. Our analysis shows that users adapt the interface expectations induced by metaphors to fit the perceived affordances instead of changing them. Users' interests, age, education, and gender influenced their expectations regarding the VR locomotion interface. Our findings contribute to a better understanding of users' mental models of VR locomotion metaphors, which seems necessary for designing more intuitive locomotion.

Citations: 0
Variational Mesh Offsetting by Smoothed Winding Number.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3637845
Haoran Sun, Shuang Wu, Hujun Bao, Jin Huang

Surface mesh offsetting is a fundamental operation in various applications (e.g., shape modeling). Implicit methods that contour a volumetric distance field are robust at handling intersection defects, but it is challenging to apply shape control (e.g., preserving sharp features in the input shape) and to avoid undesired topology changes. Explicit methods, which move vertices towards the offset surface (with possible adaptivity), can address the above issues, but it is hard to avoid intersection issues. To combine the advantages of both, we propose a variational framework that takes mesh vertex locations as variables while simultaneously involving a smooth winding-number field associated with the mesh. Under various shape regularizations (e.g., sharp feature preservation) formulated on the mesh, the objective function mainly requires that the input mesh lie on the offset contour of the field induced by the resulting mesh. Such a combination inherits the ability to apply flexible shape regularizations from explicit methods and significantly alleviates intersection issues because of the field. Moreover, the optimization problem is numerically friendly by virtue of the differentiability of the field w.r.t. the mesh vertices. Results show that we can offset a mesh while preserving sharp features of the original surface, restricting selected parts to quadric surfaces and penalizing intersections.
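Two formulas help unpack this abstract: first, the classical generalized winding number that the smoothed field builds on (standard background in the style of Jacobson et al.); second, a schematic form of the variational objective as the abstract describes it. The symbols tau, lambda, R, and P_in below are notational guesses for illustration, not the paper's own notation.

```latex
% Generalized winding number of a triangle mesh M at a query point q;
% \Omega_t(q) is the signed solid angle that triangle t subtends at q.
% The paper optimizes a smoothed variant of this field.
w_M(q) = \frac{1}{4\pi} \sum_{t \in M} \Omega_t(q)

% Schematic objective (notation assumed): move the vertices V of the
% offset mesh M(V) so that every input point p lies on the \tau
% iso-contour of the field induced by M(V), under shape regularizers R.
\min_{V} \; \sum_{p \in P_{\mathrm{in}}} \bigl( w_{M(V)}(p) - \tau \bigr)^2
\; + \; \lambda \, R(V)
```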

Citations: 0
Learning a Domain-Specialized Network for Light Field Spatial-Angular Super-Resolution.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3644930
Yifan Mao, Xinpeng Huang, Yilei Chen, Deyang Liu, Ping An, Sanghoon Lee

Light field (LF) imaging is inherently constrained by the trade-off between spatial resolution and angular sampling density. To overcome this obstacle, spatial-angular super-resolution (SR) methods have been developed to achieve concurrent enhancement in both dimensions. Traditional spatial-angular SR methods treat spatial and angular SR as separate tasks, resulting in parameter redundancy and error accumulation. While recent end-to-end approaches attempt joint processing, their uniform treatment of these distinct problems overlooks critical domain-specific requirements. To address these challenges, we propose a domain-specialized framework that deploys stage-tailored strategies to satisfy domain-specific demands. Specifically, in the angular SR stage, we introduce a cross-view consistency modulation module that enhances inter-view coherence through long-range dependency modeling of angular features. In the spatial SR stage, we propose a detail-aware state space model to reconstruct fine-grained detail. Finally, we develop a cross-domain integration module that explores spatial-angular correlations by fusing multi-representational features from both domains to foster synergistic optimization. Experimental results on public LF datasets demonstrate substantial improvements over state-of-the-art methods in both qualitative and quantitative comparisons, with approximately 50% fewer model parameters compared to competing methods.
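For readers unfamiliar with the spatial/angular split: a light field is conventionally a 4D array of views, and the two SR problems grow different axes of that array, as the annotated shapes below show (shapes are illustrative, not the paper's dataset).

```python
import numpy as np

# A light field as a 4D array: a (U, V) grid of angular views, each an
# (H, W) spatial image. Shapes here are illustrative only.
U, V, H, W = 5, 5, 32, 32
lf = np.zeros((U, V, H, W))

# Spatial SR grows per-view resolution: (5, 5, 32, 32) -> (5, 5, 64, 64)
# Angular SR grows the number of views: (5, 5, 32, 32) -> (9, 9, 32, 32)
# Joint spatial-angular SR, as in the paper, grows all four axes at once.
print(lf.shape)
```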

Citations: 0
DEGS: Deformable Event-Based 3D Gaussian Splatting From RGB and Event Stream.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3618768
Junhao He, Jiaxu Wang, Jia Li, Mingyuan Sun, Qiang Zhang, Jiahang Cao, Ziyi Zhang, Yi Gu, Jingkai Sun, Renjing Xu

Reconstructing Dynamic 3D Gaussian Splatting (3DGS) from low-framerate RGB videos is challenging because large inter-frame motions increase the uncertainty of the solution space. For example, a pixel in the first frame has more candidate correspondences in the second frame. Event cameras can asynchronously capture rapid visual changes and are robust to motion blur, but they do not provide color information. Intuitively, the event stream can provide deterministic constraints on large inter-frame motion through the event trajectories. Hence, combining low-temporal-resolution images with high-framerate event streams can address this challenge. However, jointly optimizing Dynamic 3DGS with both RGB and event modalities is difficult due to the significant discrepancy between the two data modalities. This paper introduces a novel framework that jointly optimizes dynamic 3DGS from the two modalities. The key idea is to adopt event motion priors to guide the optimization of the deformation fields. First, we extract the motion priors encoded in event streams by using the proposed LoCM unsupervised fine-tuning framework to adapt an event flow estimator to a given unseen scene. Then, we present a geometry-aware data association method to build the event-Gaussian motion correspondence, the primary foundation of the pipeline, accompanied by two useful strategies: motion decomposition and inter-frame pseudo-labels. Extensive experiments show that our method outperforms existing image- and event-based approaches across synthetic and real scenes and demonstrate that our method can effectively optimize dynamic 3DGS with the help of event data.
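For background, the deterministic constraint that events supply comes from the standard event-camera generation model (general knowledge, not a formula taken from this paper): a pixel fires an event whenever its log-intensity change since the previous event crosses a contrast threshold.

```latex
% Standard event generation model: pixel u emits an event of polarity
% s at time t once the log-intensity change since its previous event
% at t_prev reaches the contrast threshold C > 0.
s \bigl( \log I(u, t) - \log I(u, t_{\mathrm{prev}}) \bigr) \ge C,
\qquad s \in \{+1, -1\}
```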

Citations: 0
Weakly-Supervised Shape Multi-Completion of Point Clouds by Structural Decomposition.
IF 6.5 | Pub Date: 2026-02-01 | DOI: 10.1109/TVCG.2025.3636413
Changfeng Ma, Pengxiao Guo, Shuangyu Yang, Yuanqi Li, Jie Guo, Chongjun Wang, Yanwen Guo

The challenge of transforming partial point clouds into complete meshes persists, with current methods facing issues such as data accessibility constraints, failure to preserve shape, and poor robustness on real-scan data. Drawing on the structural information of objects to enhance completion, we introduce a weakly-supervised shape completion method that leverages structural decomposition and does not require SDFs during training. Representing objects as abstract structural frameworks plus part details, our method first predicts the structure of the input partial point cloud and then restores each component individually through part-decomposition completion and generation. The extracted part details are represented as images, which are porous and incomplete; hence, we employ a completion network to fill in such details. To generate multiple results, a diffusion-based generation network synthesizes a variety of details for the missing areas. The predicted structure and details are subsequently converted back into meshes, yielding the complete results. Since the details are depicted in images, our approach eliminates the need for SDFs during the training phase, achieving weak supervision. We conduct extensive comparisons on both artificial and real-scan datasets, demonstrating an average improvement of over 38.1% compared to the prior method and achieving SOTA performance.

Citations: 0