
Latest publications in IEEE transactions on visualization and computer graphics

A Neural Field-Based Approach for View Computation & Data Exploration in 3D Urban Environments.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3635528
Stefan Cobeli, Kazi Shahrukh Omar, Rodrigo Valenca, Nivan Ferreira, Fabio Miranda

Despite the growing availability of 3D urban datasets, extracting insights remains challenging due to computational bottlenecks and the complexity of interacting with data. In fact, the intricate geometry of 3D urban environments results in high degrees of occlusion and requires extensive manual viewpoint adjustments that make large-scale exploration inefficient. To address this, we propose a view-based approach for 3D data exploration, where a vector field encodes views from the environment. To support this approach, we introduce a neural field-based method that constructs an efficient implicit representation of 3D environments. This representation enables both faster direct queries, which consist of the computation of view assessment indices, and inverse queries, which help avoid occlusion and facilitate the search for views that match desired data patterns. Our approach supports key urban analysis tasks such as visibility assessments, solar exposure evaluation, and assessing the visual impact of new developments. We validate our method through quantitative experiments, case studies informed by real-world urban challenges, and feedback from domain experts. Results show its effectiveness in finding desirable viewpoints, analyzing building facade visibility, and evaluating views from outdoor spaces.

Code and data are publicly available at urbantk.org/neural-3d.
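Where the abstract's "direct" and "inverse" queries may be unfamiliar, the minimal sketch below illustrates the general pattern (it is not the authors' implementation): a small neural field is fitted to a scalar view-assessment signal, a direct query is a single forward pass, and an inverse query searches the input space by gradient ascent. The MLP architecture, the `assessment_oracle` stand-in, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy "ground-truth" view assessment: an illustrative stand-in for an
# expensive rendering-based index (e.g., visible sky or facade area).
def assessment_oracle(pos):
    return torch.sin(pos[:, 0]) * torch.cos(pos[:, 1]) + 0.1 * pos[:, 2]

# Neural field: implicit map from a 3D camera position to a view-assessment index.
field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))

opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(2000):                          # fit the field to oracle samples
    pos = torch.rand(256, 3) * 4 - 2
    loss = ((field(pos).squeeze(-1) - assessment_oracle(pos)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Direct query: evaluate the assessment index at a viewpoint in one forward pass.
print(field(torch.tensor([[0.5, 0.5, 1.0]])).item())

# Inverse query: ascend the field to find a viewpoint with a high index.
view = torch.zeros(1, 3, requires_grad=True)
view_opt = torch.optim.Adam([view], lr=0.05)
for _ in range(200):
    score = -field(view).sum()                 # maximize the assessment index
    view_opt.zero_grad(); score.backward(); view_opt.step()
print(view.detach())
```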
{"title":"A Neural Field-Based Approach for View Computation & Data Exploration in 3D Urban Environments.","authors":"Stefan Cobeli, Kazi Shahrukh Omar, Rodrigo Valenca, Nivan Ferreira, Fabio Miranda","doi":"10.1109/TVCG.2025.3635528","DOIUrl":"10.1109/TVCG.2025.3635528","url":null,"abstract":"<p><p>Despite the growing availability of 3D urban datasets, extracting insights remains challenging due to computational bottlenecks and the complexity of interacting with data. In fact, the intricate geometry of 3D urban environments results in high degrees of occlusion and requires extensive manual viewpoint adjustments that make large-scale exploration inefficient. To address this, we propose a view-based approach for 3D data exploration, where a vector field encodes views from the environment. To support this approach, we introduce a neural field-based method that constructs an efficient implicit representation of 3D environments. This representation enables both faster direct queries, which consist of the computation of view assessment indices, and inverse queries, which help avoid occlusion and facilitate the search for views that match desired data patterns. Our approach supports key urban analysis tasks such as visibility assessments, solar exposure evaluation, and assessing the visual impact of new developments. We validate our method through quantitative experiments, case studies informed by real-world urban challenges, and feedback from domain experts. Results show its effectiveness in finding desirable viewpoints, analyzing building facade visibility, and evaluating views from outdoor spaces.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1540-1553"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145575099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Parameters for Static Equilibrium of Discrete Elastic Rods With Active-Set Cholesky.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3622483
Tetsuya Takahashi, Christopher Batty

We propose a parameter optimization method for achieving static equilibrium of discrete elastic rods. Our method simultaneously optimizes material stiffness and rest shape parameters under box constraints to exactly enforce zero net forces while avoiding stability issues and violations of physical laws. For efficiency, we split our constrained optimization problem into primal and dual subproblems via the augmented Lagrangian method, while handling the dual maximization subproblem via simple vector updates. To efficiently solve the box-constrained primal minimization subproblem, we propose a new active-set Cholesky preconditioner for variants of conjugate gradient solvers with active sets. Our method surpasses prior work in generality, robustness, and speed.

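For readers unfamiliar with the augmented Lagrangian splitting the abstract refers to, the generic textbook form is shown below; this is not the paper's exact notation, with $f(p)$ standing for the net-force residual that must vanish at static equilibrium.

```latex
% Generic augmented-Lagrangian split (textbook form, not the paper's notation):
% p: stiffness and rest-shape parameters, E: objective, f(p): net-force residual.
\min_{p_{\min}\le p\le p_{\max}} E(p)\quad\text{s.t.}\quad f(p)=0,
\qquad
\mathcal{L}_{\mu}(p,\lambda)=E(p)-\lambda^{\top}f(p)+\tfrac{\mu}{2}\,\lVert f(p)\rVert^{2}
% Alternate primal and dual subproblems:
p^{k+1}=\operatorname*{arg\,min}_{p_{\min}\le p\le p_{\max}}\mathcal{L}_{\mu}\bigl(p,\lambda^{k}\bigr),
\qquad
\lambda^{k+1}=\lambda^{k}-\mu\,f\bigl(p^{k+1}\bigr)
```

The dual step is the "simple vector update" the abstract mentions; the box-constrained primal minimization is where the proposed active-set Cholesky preconditioner accelerates the conjugate-gradient solves.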
{"title":"Optimizing Parameters for Static Equilibrium of Discrete Elastic Rods With Active-Set Cholesky.","authors":"Tetsuya Takahashi, Christopher Batty","doi":"10.1109/TVCG.2025.3622483","DOIUrl":"10.1109/TVCG.2025.3622483","url":null,"abstract":"<p><p>We propose a parameter optimization method for achieving static equilibrium of discrete elastic rods. Our method simultaneously optimizes material stiffness and rest shape parameters under box constraints to exactly enforce zero net forces while avoiding stability issues and violations of physical laws. For efficiency, we split our constrained optimization problem into primal and dual subproblems via the augmented Lagrangian method, while handling the dual maximization subproblem via simple vector updates. To efficiently solve the box-constrained primal minimization subproblem, we propose a new active-set Cholesky preconditioner for variants of conjugate gradient solvers with active sets. Our method surpasses prior work in generality, robustness, and speed.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1951-1962"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145310348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reevaluating the Gaze Cursor in Virtual Reality: A Comparative Analysis of Cursor Visibility, Confirmation Mechanisms, and Task Paradigms.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3622042
Yushi Wei, Rongkai Shi, Sen Zhang, Anil Ufuk Batmaz, Pan Hui, Hai-Ning Liang

Cursors and how they are presented significantly influence user experience in both VR and non-VR environments by shaping how users interact with and perceive interfaces. In traditional interfaces, cursors serve as a fundamental component for translating human movement into digital interactions, enhancing interaction accuracy, efficiency, and experience. The design and visibility of cursors can affect users' ability to locate interactive elements and understand system feedback. In VR, cursor manipulation is more complex than in non-VR environments, as it can be controlled through hand, head, and gaze movements. With the arrival of the Apple Vision Pro, the use of gaze-controlled non-visible cursors has gained some prominence. However, there has been limited exploration of the effect of this type of cursor. This work presents a comprehensive study of the effects of cursor visibility (visible versus invisible) in gaze-based interactions within VR environments. Through two user studies, we investigate how cursor visibility impacts user performance and experience across different confirmation mechanisms and tasks. The first study focuses on selection tasks, examining the influence of target width, movement amplitude, and three common confirmation methods (air tap, blinking, and dwell). The second study explores pursuit tasks, analyzing cursor effects under varying movement speeds. Our findings reveal that cursor visibility significantly affects both objective performance metrics and subjective user preferences, but these effects vary depending on the confirmation mechanism used and task type. We propose eight design implications based on our empirical results to guide the future development of gaze-based interfaces in VR. These insights highlight the importance of tailoring cursor metaphors to specific interaction tasks and provide practical guidance for researchers and developers in optimizing VR user interfaces.

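As one concrete example of the confirmation mechanisms compared in the study, a dwell trigger reduces to a small state machine. The sketch below is a minimal illustration; the 0.6 s threshold, the API, and the target names are assumptions, not values from the paper.

```python
import time

class DwellSelector:
    """Minimal dwell-to-confirm logic: the gazed target is selected once the
    gaze has rested on it continuously for `dwell_s` seconds. Threshold and
    interface are illustrative, not taken from the study."""

    def __init__(self, dwell_s=0.6):
        self.dwell_s = dwell_s
        self.current = None
        self.since = None

    def update(self, gazed_target, now=None):
        now = time.monotonic() if now is None else now
        if gazed_target != self.current:        # gaze moved: restart the timer
            self.current, self.since = gazed_target, now
            return None
        if gazed_target is not None and now - self.since >= self.dwell_s:
            self.since = now                    # fire once, then re-arm
            return gazed_target
        return None

sel = DwellSelector()
sel.update("button_a", now=0.0)
print(sel.update("button_a", now=0.7))          # -> "button_a" (selection fires)
```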
{"title":"Reevaluating the Gaze Cursor in Virtual Reality: A Comparative Analysis of Cursor Visibility, Confirmation Mechanisms, and Task Paradigms.","authors":"Yushi Wei, Rongkai Shi, Sen Zhang, Anil Ufuk Batmaz, Pan Hui, Hai-Ning Liang","doi":"10.1109/TVCG.2025.3622042","DOIUrl":"10.1109/TVCG.2025.3622042","url":null,"abstract":"<p><p>Cursors and how they are presented significantly influence user experience in both VR and non-VR environments by shaping how users interact with and perceive interfaces. In traditional interfaces, cursors serve as a fundamental component for translating human movement into digital interactions, enhancing interaction accuracy, efficiency, and experience. The design and visibility of cursors can affect users' ability to locate interactive elements and understand system feedback. In VR, cursor manipulation is more complex than in non-VR environments, as it can be controlled through hand, head, and gaze movements. With the arrival of the Apple Vision Pro, the use of gaze-controlled non-visible cursors has gained some prominence. However, there has been limited exploration of the effect of this type of cursor. This work presents a comprehensive study of the effects of cursor visibility (visible versus invisible) in gaze-based interactions within VR environments. Through two user studies, we investigate how cursor visibility impacts user performance and experience across different confirmation mechanisms and tasks. The first study focuses on selection tasks, examining the influence of target width, movement amplitude, and three common confirmation methods (air tap, blinking, and dwell). The second study explores pursuit tasks, analyzing cursor effects under varying movement speeds. Our findings reveal that cursor visibility significantly affects both objective performance metrics and subjective user preferences, but these effects vary depending on the confirmation mechanism used and task type. We propose eight design implications based on our empirical results to guide the future development of gaze-based interfaces in VR. These insights highlight the importance of tailoring cursor metaphors to specific interaction tasks and provide practical guidance for researchers and developers in optimizing VR user interfaces.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1640-1655"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145305158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical Bayesian Guided Spatial-, Angular- and Temporal-Consistent View Synthesis.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3631702
Junyu Zhu, Hao Zhu, Sheng Wang, Zhan Ma, Xun Cao

Neural Radiance Fields (NeRF) have gained significant attention due to their precise reconstruction and rapid inference capabilities, making them highly promising for applications in virtual reality and gaming. However, extending NeRF's capabilities to dynamic scenes remains underexplored, particularly in ensuring consistent and coherent reconstructions across space, time, and viewing angles. To address this challenge, we propose Scale-NeRF, a novel approach that organizes the training of dynamic NeRFs as a progressive, scale-based refinement process, grounded in hierarchical Bayesian theory. Scale-NeRF begins by reconstructing the radiance fields using coarse, large-scale frames and iteratively refines them with progressively smaller-scale frames. This hierarchical strategy, combined with a corresponding sampling approach and a newly introduced structural loss, ensures consistency and integrity throughout the reconstruction process. Experiments on public datasets validate the superiority of Scale-NeRF over traditional methods, especially in terms of the proposed metrics evaluating spatial, angular, and temporal consistency. Furthermore, Scale-NeRF demonstrates excellent dynamic reconstruction capabilities with real-time rendering, offering a significant advancement for applications demanding both high fidelity and real-time performance.

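The paper's hierarchical Bayesian formulation goes well beyond this, but the coarse-to-fine schedule itself can be pictured as a loop over frame scales. In this minimal sketch, `downsample`, the scale list, and `fit_radiance_field` are illustrative placeholders, not the authors' code.

```python
import numpy as np

def downsample(frames, factor):
    # Average-pool every frame by `factor` along both spatial axes.
    n, h, w, c = frames.shape
    f = frames[:, : h - h % factor, : w - w % factor, :]
    f = f.reshape(n, h // factor, factor, w // factor, factor, c)
    return f.mean(axis=(2, 4))

def fit_radiance_field(batch):
    pass  # hypothetical stand-in for one optimization step of the dynamic NeRF

def train_coarse_to_fine(frames, scales=(8, 4, 2, 1), steps_per_scale=100):
    # Progressive, scale-based refinement: fit coarse large-scale frames first,
    # then keep training on progressively finer ones.
    for s in scales:
        level = frames if s == 1 else downsample(frames, s)
        for _ in range(steps_per_scale):
            fit_radiance_field(level)

train_coarse_to_fine(np.random.rand(4, 64, 64, 3))
```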
{"title":"Hierarchical Bayesian Guided Spatial-, Angular- and Temporal-Consistent View Synthesis.","authors":"Junyu Zhu, Hao Zhu, Sheng Wang, Zhan Ma, Xun Cao","doi":"10.1109/TVCG.2025.3631702","DOIUrl":"10.1109/TVCG.2025.3631702","url":null,"abstract":"<p><p>Neural Radiance Fields (NeRF) have gained significant attention due to their precise reconstruction and rapid inference capabilities, making them highly promising for applications in virtual reality and gaming. However, extending NeRF's capabilities to dynamic scenes remains underexplored, particularly in ensuring consistent and coherent reconstructions across space, time, and viewing angles. To address this challenge, we propose Scale-NeRF, a novel approach that organizes the training of dynamic NeRFs as a progressive, scale-based refinement process, grounded in hierarchical Bayesian theory. Scale-NeRF begins by reconstructing the radiance fields using coarse, large-scale frames and iteratively refines them with progressively smaller-scale frames. This hierarchical strategy, combined with a corresponding sampling approach and a newly introduced structural loss, ensures consistency and integrity throughout the reconstruction process. Experiments on public datasets validate the superiority of Scale-NeRF over traditional methods, especially in terms of the proposed metrics evaluating spatial, angular, and temporal consistency. Furthermore, Scale-NeRF demonstrates excellent dynamic reconstruction capabilities with real-time rendering, offering a significant advancement for applications demanding both high fidelity and real-time performance.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1438-1451"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145508619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How Far is Too Far? The Trade-Off Between Selection Distance and Accuracy During Teleportation in Immersive Virtual Reality.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3632345
Daniel Rupp, Tim Weissker, Matthias Wolwer, Torsten W Kuhlen, Daniel Zielasko

Target-selection-based teleportation is one of the most widely used and researched travel techniques in immersive virtual environments, requiring the user to specify a target location with a selection ray before being transported there. This work explores the influence of the maximum reach of the parabolic selection ray, modeled by different emission velocities of the projectile motion equation, and compares the resulting teleportation performance to a straight ray as the baseline. In a user study with 60 participants, we asked participants to teleport as far as possible while still remaining within accuracy constraints to understand how the theoretical implications of the projectile motion equation apply to a realistic VR use case. We found that a projectile emission velocity of $14\,\mathrm{m/s}$ (resulting in a maximal reach of $21.52\,\mathrm{m}$) offered the best trade-off between selection distance and accuracy, with the straight ray performing worse. Our results demonstrate the necessity to carefully set and report the projectile emission velocity in future work, as it was shown to directly influence user-selected distance, selection errors, and controller height during selection.

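The reported maximal reach can be roughly reproduced from the standard projectile range-from-height formula. In the sketch below, the 1.5 m launch height, g = 9.81 m/s², and the angle scan are assumptions, so the result only approximates the paper's 21.52 m.

```python
import math

def max_reach(v, h=1.5, g=9.81):
    """Maximum horizontal reach of a parabolic selection ray with emission
    velocity v (m/s), launched from height h above the floor. h and g are
    illustrative assumptions; the study's exact constants may differ."""
    best = 0.0
    for deg in range(1, 90):                       # scan launch angles
        th = math.radians(deg)
        vx, vy = v * math.cos(th), v * math.sin(th)
        r = vx * (vy + math.sqrt(vy * vy + 2 * g * h)) / g   # range from height h
        best = max(best, r)
    return best

print(round(max_reach(14.0), 2))   # ~21.4 m for v = 14 m/s, close to the reported 21.52 m
```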
{"title":"How Far is Too Far? The Trade-Off Between Selection Distance and Accuracy During Teleportation in Immersive Virtual Reality.","authors":"Daniel Rupp, Tim Weissker, Matthias Wolwer, Torsten W Kuhlen, Daniel Zielasko","doi":"10.1109/TVCG.2025.3632345","DOIUrl":"10.1109/TVCG.2025.3632345","url":null,"abstract":"<p><p>Target-selection-based teleportation is one of the most widely used and researched travel techniques in immersive virtual environments, requiring the user to specify a target location with a selection ray before being transported there. This work explores the influence of the maximum reach of the parabolic selection ray, modeled by different emission velocities of the projectile motion equation, and compares the resulting teleportation performance to a straight ray as the baseline. In a user study with 60 participants, we asked participants to teleport as far as possible while still remaining within accuracy constraints to understand how the theoretical implications of the projectile motion equation apply to a realistic VR use case. We found that a projectile emission velocity of $14 frac{m}{s}$14ms (resulting in a maximal reach of $text{21.52 m}$21.52m) offered the best trade-off between selection distance and accuracy, with an inferior performance of the straight ray. Our results demonstrate the necessity to carefully set and report the projectile emission velocity in future work, as it was shown to directly influence user-selected distance, selection errors, and controller height during selection.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1864-1878"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145524848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Make the Fastest Faster: Importance Mask Synthesis for Interactive Volume Visualization Using Reconstruction Neural Networks.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3621079
Jianxin Sun, David Lenz, Hongfeng Yu, Tom Peterka

Visualizing a large-scale volumetric dataset with high resolution is challenging due to the substantial computational time and space complexity. Recent deep learning-based image inpainting methods significantly improve rendering latency by reconstructing a high-resolution image for visualization in constant time on GPU from a partially rendered image where only a portion of pixels go through the expensive rendering pipeline. However, existing solutions need to render every pixel of either a predefined regular sampling pattern or an irregular sample pattern predicted from a low-resolution image rendering. Both methods require a significant amount of expensive pixel-level rendering. In this work, we provide Importance Mask Learning (IML) and Synthesis (IMS) networks, which are the first attempts to directly synthesize important regions of the regular sampling pattern from the user's view parameters, to further minimize the number of pixels to render by jointly considering the dataset, user behavior, and the downstream reconstruction neural network. Our solution is a unified framework to handle various types of inpainting methods through the proposed differentiable compaction/decompaction layers. Experiments show our method can further improve, at no additional cost, the overall rendering latency of state-of-the-art volume visualization methods that use reconstruction neural networks when rendering scientific volumetric datasets. Our method can also directly optimize off-the-shelf pre-trained reconstruction neural networks without lengthy retraining.

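To make the compaction/decompaction idea concrete, the sketch below shows the underlying gather/scatter data movement in NumPy. The paper's layers are differentiable and sit inside the network graph; this stand-alone version (mask, shapes, and API invented for illustration) only demonstrates the pixel bookkeeping.

```python
import numpy as np

def compact(image, mask):
    # Gather only the pixels selected by the importance mask into a dense
    # buffer, so the expensive renderer touches just these pixels.
    idx = np.flatnonzero(mask.ravel())
    return image.reshape(-1, image.shape[-1])[idx], idx

def decompact(pixels, idx, shape):
    # Scatter rendered pixels back to their image positions; unselected pixels
    # stay zero and are left for the reconstruction network to inpaint.
    out = np.zeros((shape[0] * shape[1], pixels.shape[-1]), dtype=pixels.dtype)
    out[idx] = pixels
    return out.reshape(shape[0], shape[1], -1)

img = np.random.rand(4, 4, 3).astype(np.float32)
mask = np.random.rand(4, 4) > 0.7               # stand-in for a synthesized mask
sparse, idx = compact(img, mask)
restored = decompact(sparse, idx, (4, 4))
assert np.allclose(restored[mask], img[mask])   # masked pixels survive the round trip
```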
{"title":"Make the Fastest Faster: Importance Mask Synthesis for Interactive Volume Visualization Using Reconstruction Neural Networks.","authors":"Jianxin Sun, David Lenz, Hongfeng Yu, Tom Peterka","doi":"10.1109/TVCG.2025.3621079","DOIUrl":"10.1109/TVCG.2025.3621079","url":null,"abstract":"<p><p>Visualizing a large-scale volumetric dataset with high resolution is challenging due to the substantial computational time and space complexity. Recent deep learning-based image inpainting methods significantly improve rendering latency by reconstructing a high-resolution image for visualization in constant time on GPU from a partially rendered image where only a portion of pixels go through the expensive rendering pipeline. However, existing solutions need to render every pixel of either a predefined regular sampling pattern or an irregular sample pattern predicted from a low-resolution image rendering. Both methods require a significant amount of expensive pixel-level rendering. In this work, we provide Importance Mask Learning (IML) and Synthesis (IMS) networks, which are the first attempts to directly synthesize important regions of the regular sampling pattern from the user's view parameters, to further minimize the number of pixels to render by jointly considering the dataset, user behavior, and the downstream reconstruction neural network. Our solution is a unified framework to handle various types of inpainting methods through the proposed differentiable compaction/decompaction layers. Experiments show our method can further improve the overall rendering latency of state-of-the-art volume visualization methods using reconstruction neural network for free when rendering scientific volumetric datasets. Our method can also directly optimize the off-the-shelf pre-trained reconstruction neural networks without elongated retraining.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1481-1496"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145288006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analytical Texture Mapping.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3611315
Koen Meinds, Elmar Eisemann

Resampling of warped images has been a topic of research for a long time but has only seldom focused on theoretically exact resampling. We present a resampling method for minification, applied on the texture mapping function of a 3D graphics pipeline, that is derived from sampling theory without making any approximations. Our method supports freely selectable 2D integrable prefilter (anti-aliasing) functions and uses a 2D box reconstruction filter. We have implemented our method both for CPU and GPU (OpenGL) using multiple prefilter functions defined by piece-wise polynomials. The correctness of our exact resampling method has been made plausible by comparing texture mapping results of our method with those of extreme supersampling. We additionally show how the prefilter of our method can also be applied for high-quality polygon edge anti-aliasing. Since our proposed method does not use any approximations, up to numerical precision, it can be used as a reference for approximate texture mapping methods.

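The simplest instance of this setting, a 1D box prefilter over a box-reconstructed texture, already shows why the integral has a closed form: the reconstructed texture is piecewise constant, so the prefiltered sample is an overlap-weighted average. The sketch below (with invented texel values) compares that closed form against brute-force supersampling; the paper itself handles general 2D piecewise-polynomial prefilters.

```python
import numpy as np

def exact_box_minify(texels, a, b):
    # Exact prefiltered sample: integrate the box-reconstructed (piecewise-
    # constant) texture over the box prefilter footprint [a, b], normalized.
    total = 0.0
    for k, t in enumerate(texels):
        overlap = max(0.0, min(b, k + 1.0) - max(a, float(k)))  # texel k covers [k, k+1)
        total += t * overlap
    return total / (b - a)

texels = np.array([1.0, 4.0, 2.0, 8.0])
a, b = 0.5, 3.25                       # prefilter footprint in texel space

# Dense supersampling converges to the value the closed form gives directly.
xs = np.linspace(a, b, 200001)
approx = texels[np.minimum(xs, 3.999).astype(int)].mean()
print(exact_box_minify(texels, a, b), approx)   # both ~3.0909
```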
{"title":"Analytical Texture Mapping.","authors":"Koen Meinds, Elmar Eisemann","doi":"10.1109/TVCG.2025.3611315","DOIUrl":"10.1109/TVCG.2025.3611315","url":null,"abstract":"<p><p>Resampling of warped images has been a topic of research for a long time but only seldomly has focused on theoretically exact resampling. We present a resampling method for minification, applied on the texture mapping function of a 3D graphics pipeline, that is derived from sampling theory without making any approximations. Our method supports freely selectable 2D integratable prefilter (anti-aliasing) functions and uses a 2D box reconstruction filter. We have implemented our method both for CPU and GPU (OpenGL) using multiple prefilter functions defined by piece-wise polynomials. The correctness of our exact resampling method has been made plausible by comparing texture mapping results of our method with those of extreme supersampling. We additionally show how the prefilter of our method can also be applied for high quality polygon edge anti-aliasing. Since our proposed method does not use any approximations, up to numerical precision, it can be used as a reference for approximate texture mapping methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1941-1950"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Expanding Access to Science Participation: A FAIR Framework for Petascale Data Visualization and Analytics.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3642878
Aashish Panta, Alper Sahistan, Xuan Huang, Amy A Gooch, Giorgio Scorzelli, Hector Torres, Patrice Klein, Gustavo A Ovando-Montejo, Peter Lindstrom, Valerio Pascucci

The massive data generated by scientists daily serve as both a major catalyst for new discoveries and innovations, as well as a significant roadblock that restricts access to the data. Our paper introduces a new approach to removing Big Data barriers and democratizing access to petascale data for the broader scientific community. Our novel data fabric abstraction layer allows user-friendly querying of scientific information while hiding the complexities of dealing with file systems or cloud services. We enable FAIR (Findable, Accessible, Interoperable, and Reusable) access to datasets such as NASA's petascale climate datasets. Our paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our novel data fabric abstraction utilizes state-of-the-art progressive compression algorithms and machine-learning insights to power scalable visualization dashboards for petascale data. The result provides users with the ability to identify extreme events or trends dynamically, expanding access to scientific data and further enabling discoveries. We validate our approach by improving the ability of climate scientists to visually explore their data via three fully interactive dashboards. We further validate our approach by deploying the dashboards and simplified training materials in the classroom at a minority-serving institution. These dashboards, released in simplified form to the general public, contribute significantly to a broader push to democratize the access and use of climate data.

{"title":"Expanding Access to Science Participation: A FAIR Framework for Petascale Data Visualization and Analytics.","authors":"Aashish Panta, Alper Sahistan, Xuan Huang, Amy A Gooch, Giorgio Scorzelli, Hector Torres, Patrice Klein, Gustavo A Ovando-Montejo, Peter Lindstrom, Valerio Pascucci","doi":"10.1109/TVCG.2025.3642878","DOIUrl":"10.1109/TVCG.2025.3642878","url":null,"abstract":"<p><p>The massive data generated by scientists daily serve as both a major catalyst for new discoveries and innovations, as well as a significant roadblock that restricts access to the data. Our paper introduces a new approach to removing Big Data barriers and democratizing access to petascale data for the broader scientific community. Our novel data fabric abstraction layer allows user-friendly querying of scientific information while hiding the complexities of dealing with file systems or cloud services. We enable FAIR (Findable, Accessible, Interoperable, and Reusable) access to datasets such as NASA's petascale climate datasets. Our paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our novel data fabric abstraction utilizes state-of-the art progressive compression algorithms and machine-learning insights to power scalable visualization dashboards for petascale data. The result provides users with the ability to identify extreme events or trends dynamically, expanding access to scientific data and further enabling discoveries. We validate our approach by improving the ability of climate scientists to visually explore their data via three fully interactive dashboards. We further validate our approach by deploying the dashboards and simplified training materials in the classroom at a minority-serving institution. These dashboards, released in simplified form to the general public, contribute significantly to a broader push to democratize the access and use of climate data.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1806-1821"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deterministic Point Cloud Diffusion for Denoising.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3621633
Zheng Liu, Zhenyu Huang, Maodong Pan, Ying He

Diffusion-based generative models have achieved remarkable success in image restoration by learning to iteratively refine noisy data toward clean signals. Inspired by this progress, recent efforts have begun exploring their potential in 3D domains. However, applying diffusion models to point cloud denoising introduces several challenges. Unlike images, clean and noisy point clouds are characterized by structured displacements. As a result, it is unsuitable to establish a transform mapping in the forward phase by diffusing Gaussian noise, as this approach disregards the inherent geometric relationship between the point sets. Furthermore, the stochastic nature of Gaussian noise introduces additional complexity, complicating geometric reasoning and hindering surface recovery during the reverse denoising process. In this paper, we introduce a deterministic noise-free diffusion framework that formulates point cloud denoising as a two-phase residual diffusion process. In the forward phase, directional residuals are injected into clean surfaces to construct a degradation trajectory that encodes both local displacements and their global evolution. In the reverse phase, a U-Net-based network iteratively estimates and removes these residuals, effectively retracing the degradation path backward to recover the underlying surface. By decomposing the denoising task into directional residual computation and sequential refinement, our method enables faithful surface recovery while mitigating common artifacts such as over-smoothing and under-smoothing. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance in both quantitative metrics and visual quality.

Our source code is available at https://github.com/huangzygiti/DPCD.
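A minimal sketch of the two-phase residual idea follows, with an oracle residual predictor standing in for the paper's U-Net-style network; the point count, noise level, and step count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(size=(1024, 3))                    # stand-in clean point cloud
noisy = clean + 0.02 * rng.standard_normal((1024, 3))
residual = noisy - clean                               # structured displacement field
T = 10

def forward(t):
    # Deterministic degradation: move clean points a fraction t/T along the residual.
    return clean + (t / T) * residual

def reverse(x, predict_residual):
    # Retrace the degradation path backward: remove one residual increment per step.
    for t in range(T, 0, -1):
        x = x - predict_residual(x, t) / T
    return x

assert np.allclose(forward(T), noisy)                  # forward ends at the noisy cloud

# With an oracle predictor the reverse pass recovers the clean points exactly;
# in the paper a trained network estimates the residual at each step instead.
recovered = reverse(noisy, lambda x, t: residual)
print(np.abs(recovered - clean).max())                 # ~0 up to float error
```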
{"title":"Deterministic Point Cloud Diffusion for Denoising.","authors":"Zheng Liu, Zhenyu Huang, Maodong Pan, Ying He","doi":"10.1109/TVCG.2025.3621633","DOIUrl":"10.1109/TVCG.2025.3621633","url":null,"abstract":"<p><p>Diffusion-based generative models have achieved remarkable success in image restoration by learning to iteratively refine noisy data toward clean signals. Inspired by this progress, recent efforts have begun exploring their potential in 3D domains. However, applying diffusion models to point cloud denoising introduces several challenges. Unlike images, clean and noisy point clouds are characterized by structured displacements. As a result, it is unsuitable to establish a transform mapping in the forward phase by diffusing Gaussian noise, as this approach disregards the inherent geometric relationship between the point sets. Furthermore, the stochastic nature of Gaussian noise introduces additional complexity, complicating geometric reasoning and hindering surface recovery during the reverse denoising process. In this paper, we introduce a deterministic noise-free diffusion framework that formulates point cloud denoising as a two-phase residual diffusion process. In the forward phase, directional residuals are injected into clean surfaces to construct a degradation trajectory that encodes both local displacements and their global evolution. In the reverse phase, a U-Net-based network iteratively estimates and removes these residuals, effectively retracing the degradation path backward to recover the underlying surface. By decomposing the denoising task into directional residual computation and sequential refinement, our method enables faithful surface recovery while mitigating common artifacts such as over-smoothing and under-smoothing. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance in both quantitative metrics and visual quality.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1822-1834"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145310424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reimagining Disassembly Interfaces With Visualization: Combining Instruction Tracing and Control Flow With DisViz.
IF 6.5 Pub Date: 2026-02-01 DOI: 10.1109/TVCG.2025.3627171
Shadmaan Hye, Matthew P LeGendre, Katherine E Isaacs

In applications where efficiency is critical, developers may examine their compiled binaries, seeking to understand how the compiler transformed their source code and what performance implications that transformation may have. This analysis is challenging due to the vast number of disassembled binary instructions and the many-to-many mappings between them and the source code. These problems are exacerbated as source code size increases, giving the compiler more freedom to map and disperse binary instructions across the disassembly space. Interfaces for disassembly typically display instructions as an unstructured listing or sacrifice the order of execution. We design a new visual interface for disassembly code that combines execution order with control flow structure, enabling analysts to both trace through code and identify familiar aspects of the computation. Central to our approach is a novel layout of instructions grouped into basic blocks that displays a looping structure in an intuitive way. We add to this disassembly representation a unique block-based mini-map that leverages our layout and shows context across thousands of disassembly instructions. Finally, we embed our disassembly visualization in a web-based tool, DisViz, which adds dynamic linking with source code across the entire application. DisViz was developed in collaboration with program analysis experts following design study methodology and was validated through evaluation sessions with ten participants from four institutions. Participants successfully completed the evaluation tasks, hypothesized about compiler optimizations, and noted the utility of our new disassembly view. Our evaluation suggests that our new integrated view helps application developers in understanding and navigating disassembly code.

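Grouping instructions into basic blocks, the unit of DisViz's layout, can be done with the classic "leader" algorithm. The sketch below uses an invented `(addr, mnemonic, branch_target)` tuple format rather than DisViz's actual representation.

```python
def basic_blocks(instructions):
    """Partition a disassembly listing into basic blocks with the classic
    leader algorithm. Each instruction is (addr, mnemonic, branch_target_or_None);
    this input format is illustrative, not DisViz's real one."""
    branch_ops = {"jmp", "je", "jne", "call", "ret"}
    leaders = {instructions[0][0]}                     # first instruction
    for i, (addr, op, target) in enumerate(instructions):
        if op in branch_ops:
            if target is not None:
                leaders.add(target)                    # branch target
            if i + 1 < len(instructions):
                leaders.add(instructions[i + 1][0])    # fall-through successor
    blocks, cur = [], []
    for ins in instructions:
        if ins[0] in leaders and cur:                  # a leader starts a new block
            blocks.append(cur); cur = []
        cur.append(ins)
    if cur:
        blocks.append(cur)
    return blocks

code = [(0, "mov", None), (1, "cmp", None), (2, "je", 5),
        (3, "add", None), (4, "jmp", 1), (5, "ret", None)]
for b in basic_blocks(code):
    print([addr for addr, _, _ in b])   # [0], [1, 2], [3, 4], [5]
```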
{"title":"Reimagining Disassembly Interfaces With Visualization: Combining Instruction Tracing and Control Flow With DisViz.","authors":"Shadmaan Hye, Matthew P LeGendre, Katherine E Isaacs","doi":"10.1109/TVCG.2025.3627171","DOIUrl":"10.1109/TVCG.2025.3627171","url":null,"abstract":"<p><p>In applications where efficiency is critical, developers may examine their compiled binaries, seeking to understand how the compiler transformed their source code and what performance implications that transformation may have. This analysis is challenging due to the vast number of disassembled binary instructions and the many-to-many mappings between them and the source code. These problems are exacerbated as source code size increases, giving the compiler more freedom to map and disperse binary instructions across the disassembly space. Interfaces for disassembly typically display instructions as an unstructured listing or sacrifice the order of execution. We design a new visual interface for disassembly code that combines execution order with control flow structure, enabling analysts to both trace through code and identify familiar aspects of the computation. Central to our approach is a novel layout of instructions grouped into basic blocks that displays a looping structure in an intuitive way. We add to this disassembly representation a unique block-based mini-map that leverages our layout and shows context across thousands of disassembly instructions. Finally, we embed our disassembly visualization in a web-based tool, DisViz, which adds dynamic linking with source code across the entire application. DizViz was developed in collaboration with program analysis experts following design study methodology and was validated through evaluation sessions with ten participants from four institutions. Participants successfully completed the evaluation tasks, hypothesized about compiler optimizations, and noted the utility of our new disassembly view. Our evaluation suggests that our new integrated view helps application developers in understanding and navigating disassembly code.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":"1729-1742"},"PeriodicalIF":6.5,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145423755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0