
Latest publications in IEEE Transactions on Visualization and Computer Graphics

Hierarchical Bayesian Guided Spatial-, Angular- and Temporal-Consistent View Synthesis.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3631702
Junyu Zhu, Hao Zhu, Sheng Wang, Zhan Ma, Xun Cao

Neural Radiance Fields (NeRF) have gained significant attention due to their precise reconstruction and rapid inference capabilities, making them highly promising for applications in virtual reality and gaming. However, extending NeRF's capabilities to dynamic scenes remains underexplored, particularly in ensuring consistent and coherent reconstructions across space, time, and viewing angles. To address this challenge, we propose Scale-NeRF, a novel approach that organizes the training of dynamic NeRFs as a progressive, scale-based refinement process, grounded in hierarchical Bayesian theory. Scale-NeRF begins by reconstructing the radiance fields using coarse, large-scale frames and iteratively refines them with progressively smaller-scale frames. This hierarchical strategy, combined with a corresponding sampling approach and a newly introduced structural loss, ensures consistency and integrity throughout the reconstruction process. Experiments on public datasets validate the superiority of Scale-NeRF over traditional methods, especially in terms of the proposed metrics evaluating spatial, angular, and temporal consistency. Furthermore, Scale-NeRF demonstrates excellent dynamic reconstruction capabilities with real-time rendering, offering a significant advancement for applications demanding both high fidelity and real-time performance.
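
The coarse-to-fine schedule described in the abstract can be pictured as a loop over frame scales. The sketch below only illustrates that progressive-refinement pattern with NumPy placeholders; the pooling, upsampling, and blending used as stand-ins for radiance-field fitting are illustrative assumptions, not the paper's method.

```python
import numpy as np

def average_pool(frames, factor):
    # frames: (n, h, w, c); average-pool each frame by `factor` in both spatial axes
    n, h, w, c = frames.shape
    h2, w2 = h // factor, w // factor
    cropped = frames[:, :h2 * factor, :w2 * factor, :]
    return cropped.reshape(n, h2, factor, w2, factor, c).mean(axis=(2, 4))

def upsample(img, factor):
    # nearest-neighbour upsampling of an (h, w, c) image
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def coarse_to_fine(frames, scales=(8, 4, 2, 1)):
    """Fit at the coarsest scale first, then upsample the estimate and refine it
    with progressively smaller-scale (higher-resolution) supervision."""
    state, prev = None, None
    for s in scales:                                    # coarse -> fine
        target = average_pool(frames, s).mean(axis=0)   # stand-in supervision at this scale
        if state is None:
            state = target                              # coarse initialisation
        else:
            state = 0.5 * upsample(state, prev // s) + 0.5 * target  # refine previous estimate
        prev = s
    return state

frames = np.random.rand(4, 64, 64, 3).astype(np.float32)
print(coarse_to_fine(frames).shape)   # (64, 64, 3)
```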

Citations: 0
How Far is Too Far? The Trade-Off Between Selection Distance and Accuracy During Teleportation in Immersive Virtual Reality.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3632345
Daniel Rupp, Tim Weissker, Matthias Wolwer, Torsten W Kuhlen, Daniel Zielasko

Target-selection-based teleportation is one of the most widely used and researched travel techniques in immersive virtual environments, requiring the user to specify a target location with a selection ray before being transported there. This work explores the influence of the maximum reach of the parabolic selection ray, modeled by different emission velocities of the projectile motion equation, and compares the resulting teleportation performance to a straight ray as the baseline. In a user study with 60 participants, we asked participants to teleport as far as possible while still remaining within accuracy constraints to understand how the theoretical implications of the projectile motion equation apply to a realistic VR use case. We found that a projectile emission velocity of 14 m/s (resulting in a maximal reach of 21.52 m) offered the best trade-off between selection distance and accuracy, with the straight ray performing worse. Our results demonstrate the necessity to carefully set and report the projectile emission velocity in future work, as it was shown to directly influence user-selected distance, selection errors, and controller height during selection.
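
As a rough cross-check of those numbers (not taken from the paper), the maximal reach follows from the standard projectile-range formula; the launch angle and controller height used below are assumptions, since the abstract does not state them.

```python
import math

def parabolic_reach(v, angle_deg=45.0, launch_height=1.6, g=9.81):
    """Horizontal distance a projectile launched at speed v (m/s) travels
    before returning to floor level, starting `launch_height` metres above it."""
    theta = math.radians(angle_deg)
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    t_floor = (vy + math.sqrt(vy * vy + 2.0 * g * launch_height)) / g  # time to hit the floor
    return vx * t_floor

print(f"{parabolic_reach(14.0):.2f} m")   # ~21.5 m with these assumed values
```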

Citations: 0
Make the Fastest Faster: Importance Mask Synthesis for Interactive Volume Visualization Using Reconstruction Neural Networks.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3621079
Jianxin Sun, David Lenz, Hongfeng Yu, Tom Peterka

Visualizing a large-scale volumetric dataset with high resolution is challenging due to the substantial computational time and space complexity. Recent deep learning-based image inpainting methods significantly improve rendering latency by reconstructing a high-resolution image for visualization in constant time on GPU from a partially rendered image where only a portion of pixels go through the expensive rendering pipeline. However, existing solutions need to render every pixel of either a predefined regular sampling pattern or an irregular sample pattern predicted from a low-resolution image rendering. Both methods require a significant amount of expensive pixel-level rendering. In this work, we provide Importance Mask Learning (IML) and Synthesis (IMS) networks, which are the first attempts to directly synthesize important regions of the regular sampling pattern from the user's view parameters, to further minimize the number of pixels to render by jointly considering the dataset, user behavior, and the downstream reconstruction neural network. Our solution is a unified framework to handle various types of inpainting methods through the proposed differentiable compaction/decompaction layers. Experiments show our method can further improve the overall rendering latency of state-of-the-art volume visualization methods that use reconstruction neural networks, at no extra cost, when rendering scientific volumetric datasets. Our method can also directly optimize off-the-shelf pre-trained reconstruction neural networks without lengthy retraining.
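
The compaction/decompaction idea, rendering only the masked pixels and scattering the results back into image space for the reconstruction network, can be pictured as a plain gather/scatter. The sketch below is a shape-level illustration of that reading only; it is not the paper's differentiable layers, and the random mask stands in for the synthesized importance mask.

```python
import numpy as np

def compact(mask):
    """Gather: flat indices of the pixels selected by the importance mask."""
    return np.flatnonzero(mask)

def decompact(rendered, indices, image_shape, fill=0.0):
    """Scatter: place per-pixel rendered values back into a full-resolution image."""
    h, w = image_shape
    out = np.full((h * w, rendered.shape[-1]), fill, dtype=rendered.dtype)
    out[indices] = rendered
    return out.reshape(h, w, -1)

h, w = 32, 32
mask = np.random.rand(h, w) < 0.1               # stand-in for the synthesized importance mask
idx = compact(mask)
colors = np.random.rand(idx.size, 3)            # stand-in for the expensively rendered pixels
partial_image = decompact(colors, idx, (h, w))  # input to the inpainting/reconstruction network
print(partial_image.shape, idx.size, "pixels rendered")
```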

Citations: 0
Analytical Texture Mapping.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3611315
Koen Meinds, Elmar Eisemann

Resampling of warped images has long been a topic of research, but work has only seldom focused on theoretically exact resampling. We present a resampling method for minification, applied on the texture mapping function of a 3D graphics pipeline, that is derived from sampling theory without making any approximations. Our method supports freely selectable 2D integrable prefilter (anti-aliasing) functions and uses a 2D box reconstruction filter. We have implemented our method both for CPU and GPU (OpenGL) using multiple prefilter functions defined by piece-wise polynomials. The correctness of our exact resampling method has been made plausible by comparing texture mapping results of our method with those of extreme supersampling. We additionally show how the prefilter of our method can also be applied for high-quality polygon edge anti-aliasing. Since our proposed method does not use any approximations, up to numerical precision, it can be used as a reference for approximate texture mapping methods.
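
For background on what a sampling-theory derivation involves, the textbook resampling pipeline (reconstruct the texel grid, warp it, prefilter, then sample) can be written as below; this is the standard formulation only, and the paper's own derivation and notation may differ.

```latex
% Standard resampling pipeline: reconstruct, warp, prefilter, sample.
% Background only; the paper's exact derivation and notation may differ.
\begin{align*}
  f_r(\mathbf{u}) &= \sum_{\mathbf{k}\in\mathbb{Z}^2} f[\mathbf{k}]\; r(\mathbf{u}-\mathbf{k})
      && \text{box reconstruction of the texel grid } f[\mathbf{k}] \\
  g(\mathbf{x})   &= \int_{\mathbb{R}^2} h(\mathbf{x}-\mathbf{x}')\;
      f_r\bigl(m^{-1}(\mathbf{x}')\bigr)\, d\mathbf{x}'
      && \text{warp by } m,\ \text{prefilter with } h,\ \text{sample at pixel } \mathbf{x}
\end{align*}
```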

Citations: 0
Expanding Access to Science Participation: A FAIR Framework for Petascale Data Visualization and Analytics.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3642878
Aashish Panta, Alper Sahistan, Xuan Huang, Amy A Gooch, Giorgio Scorzelli, Hector Torres, Patrice Klein, Gustavo A Ovando-Montejo, Peter Lindstrom, Valerio Pascucci

The massive data generated by scientists daily serve as both a major catalyst for new discoveries and innovations and a significant roadblock that restricts access to the data. Our paper introduces a new approach to removing Big Data barriers and democratizing access to petascale data for the broader scientific community. Our novel data fabric abstraction layer allows user-friendly querying of scientific information while hiding the complexities of dealing with file systems or cloud services. We enable FAIR (Findable, Accessible, Interoperable, and Reusable) access to datasets such as NASA's petascale climate datasets. Our paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our novel data fabric abstraction utilizes state-of-the-art progressive compression algorithms and machine-learning insights to power scalable visualization dashboards for petascale data. The result provides users with the ability to identify extreme events or trends dynamically, expanding access to scientific data and further enabling discoveries. We validate our approach by improving the ability of climate scientists to visually explore their data via three fully interactive dashboards. We further validate our approach by deploying the dashboards and simplified training materials in the classroom at a minority-serving institution. These dashboards, released in simplified form to the general public, contribute significantly to a broader push to democratize the access and use of climate data.

Citations: 0
Deterministic Point Cloud Diffusion for Denoising.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3621633
Zheng Liu, Zhenyu Huang, Maodong Pan, Ying He

Diffusion-based generative models have achieved remarkable success in image restoration by learning to iteratively refine noisy data toward clean signals. Inspired by this progress, recent efforts have begun exploring their potential in 3D domains. However, applying diffusion models to point cloud denoising introduces several challenges. Unlike images, clean and noisy point clouds are characterized by structured displacements. As a result, it is unsuitable to establish a transform mapping in the forward phase by diffusing Gaussian noise, as this approach disregards the inherent geometric relationship between the point sets. Furthermore, the stochastic nature of Gaussian noise introduces additional complexity, complicating geometric reasoning and hindering surface recovery during the reverse denoising process. In this paper, we introduce a deterministic noise-free diffusion framework that formulates point cloud denoising as a two-phase residual diffusion process. In the forward phase, directional residuals are injected into clean surfaces to construct a degradation trajectory that encodes both local displacements and their global evolution. In the reverse phase, a U-Net-based network iteratively estimates and removes these residuals, effectively retracing the degradation path backward to recover the underlying surface. By decomposing the denoising task into directional residual computation and sequential refinement, our method enables faithful surface recovery while mitigating common artifacts such as over-smoothing and under-smoothing. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance in both quantitative metrics and visual quality. Our source code is available at https://github.com/huangzygiti/DPCD.
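
The two-phase residual idea, building a degradation trajectory by adding growing fractions of the residual and then walking it backwards by repeatedly subtracting an estimated residual step, can be sketched as follows. The `predict_step` argument here is an oracle stub standing in for the learned U-Net, so this only illustrates the trajectory structure, not the trained model.

```python
import numpy as np

def forward_trajectory(clean, noisy, steps=10):
    """Interpolate from the clean surface towards the noisy observation by
    injecting an increasing fraction of the residual (noisy - clean)."""
    residual = noisy - clean
    return [clean + (t / steps) * residual for t in range(steps + 1)]

def reverse_denoise(noisy, predict_step, steps=10):
    """Walk the degradation path backwards by subtracting predicted residual steps."""
    x = noisy.copy()
    for t in range(steps, 0, -1):
        x = x - predict_step(x, t)   # in the paper this prediction comes from a U-Net
    return x

# toy usage with an oracle predictor (it knows the true residual), just to show the structure
clean = np.random.rand(2048, 3)
noisy = clean + 0.01 * np.random.randn(2048, 3)
traj = forward_trajectory(clean, noisy, steps=10)
oracle = lambda x, t: (noisy - clean) / 10.0
denoised = reverse_denoise(noisy, oracle, steps=10)
print(len(traj), np.abs(denoised - clean).max())   # 11 steps, error ~0 for the oracle
```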

Citations: 0
Reimagining Disassembly Interfaces With Visualization: Combining Instruction Tracing and Control Flow With DisViz.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3627171
Shadmaan Hye, Matthew P LeGendre, Katherine E Isaacs

In applications where efficiency is critical, developers may examine their compiled binaries, seeking to understand how the compiler transformed their source code and what performance implications that transformation may have. This analysis is challenging due to the vast number of disassembled binary instructions and the many-to-many mappings between them and the source code. These problems are exacerbated as source code size increases, giving the compiler more freedom to map and disperse binary instructions across the disassembly space. Interfaces for disassembly typically display instructions as an unstructured listing or sacrifice the order of execution. We design a new visual interface for disassembly code that combines execution order with control flow structure, enabling analysts to both trace through code and identify familiar aspects of the computation. Central to our approach is a novel layout of instructions grouped into basic blocks that displays a looping structure in an intuitive way. We add to this disassembly representation a unique block-based mini-map that leverages our layout and shows context across thousands of disassembly instructions. Finally, we embed our disassembly visualization in a web-based tool, DisViz, which adds dynamic linking with source code across the entire application. DisViz was developed in collaboration with program analysis experts following design study methodology and was validated through evaluation sessions with ten participants from four institutions. Participants successfully completed the evaluation tasks, hypothesized about compiler optimizations, and noted the utility of our new disassembly view. Our evaluation suggests that our new integrated view helps application developers in understanding and navigating disassembly code.
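
As background for the basic-block grouping mentioned above, the textbook construction splits a linear listing at branch targets and at instructions that follow a branch. The toy sketch below shows that generic construction on a hypothetical listing (the addresses and mnemonics are made up); it is not DisViz's own layout code.

```python
def basic_blocks(instructions, branches):
    """Partition a linear disassembly listing into basic blocks.
    instructions: list of (addr, text); branches: dict mapping the address of each
    branch instruction to its target address (None for returns/indirect jumps)."""
    addrs = [a for a, _ in instructions]
    leaders = {addrs[0]}
    for i, addr in enumerate(addrs):
        if addr in branches:
            if branches[addr] is not None:
                leaders.add(branches[addr])    # a branch target starts a block
            if i + 1 < len(addrs):
                leaders.add(addrs[i + 1])      # the instruction after a branch starts a block
    blocks, current = [], []
    for addr, text in instructions:
        if addr in leaders and current:
            blocks.append(current)
            current = []
        current.append((addr, text))
    blocks.append(current)
    return blocks

# hypothetical listing containing one small loop
listing = [(0, "mov r1, #0"), (4, "add r1, r1, #1"), (8, "cmp r1, #10"),
           (12, "blt 4"), (16, "ret")]
for block in basic_blocks(listing, {12: 4, 16: None}):
    print(block)
```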

Citations: 0
Consistent 3D Human Reconstruction From Monocular Video: Learning Correctable Appearance and Temporal Motion Priors.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3626741
Cheng Shang, Liang An, Tingting Li, Jiajun Zhang, Yuxiang Zhang, Jidong Tian, Yebin Liu, Xubo Yang

Recent advancements in rendering dynamic humans using NeRF and 3D Gaussian splatting have made significant progress, leveraging implicit geometry learning and image appearance rendering to create digital humans. However, in monocular video rendering, there are still challenges in rendering subtle and complex motion from different viewpoints and states, primarily due to the imbalance of viewpoints. Additionally, ensuring continuity between adjacent frames when rendering from novel and free viewpoints remains a difficult task. To address these challenges, we first propose a pixel-level motion correction module that adjusts the errors in the learned representation between different viewpoints. We also introduce a temporal information-based model to improve motion continuity by leveraging adjacent frames. Experimental results on dynamic human rendering, using the NeuMan, ZJU-Mocap, and People-Snapshot datasets, demonstrate that our method outperforms state-of-the-art techniques both quantitatively and qualitatively.

Citations: 0
Quality Assessment of 3D Human Animation: Subjective and Objective Evaluation.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3631385
Rim Rekik, Stefanie Wuhrer, Ludovic Hoyet, Katja Zibrek, Anne-Helene Olivier

Virtual human animations have a wide range of applications in virtual and augmented reality. While automatic generation methods of animated virtual humans have been developed, assessing their quality remains challenging. Recently, approaches introducing task-oriented evaluation metrics have been proposed, leveraging neural network training. However, quality assessment measures for animated virtual humans not generated with parametric body models have yet to be developed. In this context, we introduce a first such quality assessment measure leveraging a novel data-driven framework. First, we generate a dataset of virtual human animations together with their corresponding subjective realism evaluation scores collected with a user study. Second, we use the resulting dataset to learn to predict perceptual evaluation scores. Results indicate that training a linear regressor on our dataset yields a correlation of 90%, which outperforms a strong deep learning baseline.
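
The last sentence describes a standard fitting setup: regress per-animation features against collected subjective scores and report the correlation of the predictions. The sketch below reproduces that generic procedure on synthetic placeholders (the feature matrix, score vector, and dimensions are assumptions, not the paper's dataset).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                                  # per-animation features (placeholder)
scores = X @ rng.normal(size=16) + 0.3 * rng.normal(size=200)   # subjective realism scores (placeholder)

# least-squares linear regressor with an intercept term
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, scores, rcond=None)
pred = A @ w

# Pearson correlation between predicted and collected scores
r = np.corrcoef(pred, scores)[0, 1]
print(f"correlation: {r:.2f}")
```

In practice the regressor would be evaluated on held-out animations, for example with cross-validation; the snippet only shows the fitting and correlation mechanics.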

Citations: 0
CellScout: Visual Analytics for Mining Biomarkers in Cell State Discovery.
IF 6.5 Pub Date : 2026-02-01 DOI: 10.1109/TVCG.2025.3636102
Rui Sheng, Zelin Zang, Jiachen Wang, Yan Luo, Zixin Chen, Yan Zhou, Shaolun Ruan, Huamin Qu

Cell state discovery is crucial for understanding biological systems and enhancing medical outcomes. A key aspect of this process is identifying distinct biomarkers that define specific cell states. However, difficulties arise from the co-discovery process of cell states and biomarkers: biologists often use dimensionality reduction to visualize cells in a two-dimensional space. Then they usually interpret visually clustered cells as distinct states, from which they seek to identify unique biomarkers. However, this assumption often fails to hold due to internal inconsistencies within a cluster, making the process trial-and-error and highly uncertain. Therefore, biologists urgently need effective tools to help uncover the hidden association relationships between different cell populations and their potential biomarkers. To address this problem, we first designed a machine-learning algorithm based on the Mixture-of-Experts (MoE) technique to identify meaningful associations between cell populations and biomarkers. We further developed a visual analytics system, CellScout, in collaboration with biologists, to help them explore and refine these association relationships to advance cell state discovery. We validated our system through expert interviews, from which we further selected a representative case to demonstrate its effectiveness in discovering new cell states.
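
The Mixture-of-Experts component can be pictured as a gating network that softly assigns each cell to experts whose outputs are then combined. The sketch below shows only that generic gating mechanism on toy dimensions with random weights; it is not the association model trained in the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_experts(x, gate_w, expert_ws):
    """x: (n, d) cell profiles; gate_w: (d, k); expert_ws: list of k (d, m) matrices.
    Each expert scores m candidate biomarkers; the gate mixes the experts per cell."""
    gates = softmax(x @ gate_w)                         # (n, k) soft assignment of cells to experts
    expert_out = np.stack([x @ w for w in expert_ws])   # (k, n, m) per-expert biomarker scores
    return np.einsum('nk,knm->nm', gates, expert_out)   # (n, m) gated combination

# toy usage
rng = np.random.default_rng(1)
n, d, k, m = 100, 50, 4, 20
x = rng.normal(size=(n, d))
out = mixture_of_experts(x, rng.normal(size=(d, k)),
                         [rng.normal(size=(d, m)) for _ in range(k)])
print(out.shape)   # (100, 20)
```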

Citations: 0