
ACM Transactions on Graphics: Latest Articles

STGlight: Online Indoor Lighting Estimation via Spatio-Temporal Gaussian Fusion
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763350
Shiyuan Shen, Zhongyun Bao, Hong Ding, Wenju Xu, Tenghui Lai, Chunxia Xiao
Estimating lighting in indoor scenes is particularly challenging due to the diverse distribution of light sources and the complexity of scene geometry. Previous methods mainly focused on spatial variability and consistency for a single image, or on temporal consistency for video sequences. However, these approaches fail to achieve spatio-temporal consistency in video lighting estimation, which restricts applications such as compositing animated models into videos. In this paper, we propose STGlight, a lightweight and effective method for spatio-temporally consistent video lighting estimation. Our network processes a stream of LDR RGB-D video frames while maintaining incrementally updated global representations of both geometry and lighting, enabling the prediction of HDR environment maps at arbitrary locations for each frame. We model indoor lighting with three components: visible light sources providing direct illumination, ambient lighting approximating indirect illumination, and local environment textures producing high-quality specular reflections on glossy objects. To capture spatially varying lighting, we represent scene geometry with point clouds, which support efficient spatio-temporal fusion and allow us to handle moderately dynamic scenes. To ensure temporal consistency, we apply a transformer-based fusion block that propagates lighting features across frames. Building on this, we further handle dynamic lighting with moving objects or changing light conditions by applying intrinsic decomposition on the point cloud and integrating the decomposed components with a neural fusion module. Experiments show that our online method can effectively predict lighting for any position within the video stream while maintaining spatial variability and spatio-temporal consistency. Code is available at: https://github.com/nauyihsnehs/STGlight.
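The abstract describes, at a high level, maintaining incrementally updated global point-cloud representations fused across frames. For intuition only, here is a minimal sketch of that incremental-fusion idea, with the learned transformer fusion replaced by plain voxel-bucketed running averages; all function names, the voxel size, and the feature choice (RGB) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unproject(depth, rgb, K, cam_to_world):
    """Lift an RGB-D frame to world-space points carrying color features."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) / K[0, 0] * z
    y = (v.ravel() - K[1, 2]) / K[1, 1] * z
    pts = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
    return (cam_to_world @ pts.T).T[:, :3], rgb.reshape(-1, 3)[valid]

def fuse(pts, feats, counts, new_pts, new_feats, voxel=0.05):
    """Merge a new frame into the global cloud by voxel-bucketed averaging."""
    store = {}
    for p, f, c in zip(pts, feats, counts):
        store[tuple(np.floor(p / voxel).astype(int))] = [p, f, c]
    for p, f in zip(new_pts, new_feats):
        k = tuple(np.floor(p / voxel).astype(int))
        if k in store:
            q, g, c = store[k]
            store[k] = [q, (g * c + f) / (c + 1), c + 1]  # running average
        else:
            store[k] = [p, f, 1]
    vals = list(store.values())
    return (np.array([v[0] for v in vals]), np.array([v[1] for v in vals]),
            np.array([v[2] for v in vals]))

# Two identical frames: every surviving voxel should reach a count of 2.
depth = np.full((4, 4), 2.0)
rgb = np.ones((4, 4, 3))
K = np.array([[2.0, 0, 2], [0, 2.0, 2], [0, 0, 1]])
p, f = unproject(depth, rgb, K, np.eye(4))
gp, gf, gc = fuse(p, f, np.ones(len(p)), p, f)
print(len(gp), gc.max())
```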
Citations: 0
A Highly-Efficient Hybrid Simulation System for Flight Controller Design and Evaluation of Unmanned Aerial Vehicles 面向无人机飞行控制器设计与评估的高效混合仿真系统
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763283
Jiwei Wang, Wenbin Song, Yicheng Fan, Yang Wang, Xiaopei Liu
Unmanned aerial vehicles (UAVs) have demonstrated remarkable efficacy across diverse fields. Nevertheless, developing flight controllers tailored to a specific UAV design, particularly in environments with strong fluid-interactive dynamics, remains challenging. Conventional controller-design experience often falls short in such cases, making it infeasible to apply time-tested practices. Consequently, a simulation test bed becomes indispensable for controller design and evaluation prior to actual implementation on the physical UAV. Such a platform should allow meticulous adjustment of controllers and should transfer to real-world systems without significant performance degradation. Existing simulators predominantly hinge on empirical models for efficiency, often overlooking the dynamic interplay between the UAV and the surrounding airflow. This makes it difficult to mimic more complex flight maneuvers, such as an abrupt midair halt inside a narrow channel, where the UAV may experience strong fluid-structure interactions. On the other hand, simulators that do consider the complex surrounding airflow are extremely slow and inadequate for supporting the design and evaluation of flight controllers. In this paper, we present a novel remedy for highly efficient UAV flight simulation: a hybrid model that combines our novel far-field adaptive block-based fluid simulator with parametric empirical models situated near the boundary of the UAV, with the model parameters calibrated automatically. With this newly devised simulator, a broader spectrum of flight scenarios can be explored for controller design and assessment, including those influenced by potent close-proximity effects or situations where multiple UAVs operate in close quarters. The practical worth of our simulator has been validated through comparisons with actual UAV flight data. We further showcase its utility in designing flight controllers for fixed-wing, multi-rotor, and hybrid UAVs, and even exemplify its application when multiple UAVs are involved, underlining the unique value of our system for flight controller design.
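As a toy illustration of the hybrid coupling sketched above (and not the authors' calibrated system), the snippet below drives a momentum-theory rotor thrust model with local inflow sampled from a coarse far-field velocity grid. All constants, names, and the nearest-cell lookup are illustrative assumptions.

```python
import numpy as np

RHO = 1.225         # air density (kg/m^3)
ROTOR_AREA = 0.05   # rotor disk area (m^2)
K_THRUST = 1e-7     # empirical thrust coefficient (hypothetical)

def sample_inflow(vel_grid, origin, spacing, pos):
    """Nearest-cell lookup of the far-field velocity at a rotor position."""
    idx = np.clip(((pos - origin) / spacing).astype(int),
                  0, np.array(vel_grid.shape[:3]) - 1)
    return vel_grid[tuple(idx)]

def rotor_thrust(rpm, inflow_axial):
    """Momentum-theory-style thrust, reduced by axial inflow at the disk."""
    static = K_THRUST * rpm ** 2
    induced = np.sqrt(static / (2 * RHO * ROTOR_AREA))  # hover induced velocity
    return static * max(0.0, 1.0 - inflow_axial / max(induced, 1e-6))

# Far-field grid with a uniform 2 m/s downwash; thrust drops below static.
grid = np.zeros((8, 8, 8, 3))
grid[..., 2] = -2.0
v = sample_inflow(grid, np.zeros(3), 0.5, np.array([1.0, 1.0, 2.0]))
print(rotor_thrust(rpm=8000, inflow_axial=-v[2]))
```

In the paper's pipeline, the empirical parameters are calibrated automatically against the fluid simulator; here they are fixed by hand.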
Citations: 0
Ultrafast and Controllable Online Motion Retargeting for Game Scenarios
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763351
Tianze Guo, Zhedong Chen, Yi Jiang, Linjun Wu, Xilei Wei, Lang Xu, Yeshuang Lin, He Wang, Xiaogang Jin
Geometry-aware online motion retargeting is crucial for real-time character animation in gaming and virtual reality. However, existing methods often rely on complex optimization procedures or deep neural networks, which constrain their applicability in real-time scenarios. Moreover, they offer limited control over fine-grained motion details involved in character interactions, resulting in less realistic outcomes. To overcome these limitations, we propose a novel optimization framework for ultrafast, lightweight motion retargeting with joint-level control (i.e., control over joint position, bone orientation, etc.). Our approach introduces a semantic-aware objective grounded in a spherical geometry representation, coupled with a bone-length-preserving algorithm that iteratively solves this objective. This formulation preserves spatial relationships among spheres, thereby maintaining motion semantics, mitigating interpenetration, and ensuring contact. It is lightweight and computationally efficient, making it particularly suitable for time-critical real-time deployment scenarios. Additionally, we incorporate a heuristic optimization strategy that enables rapid convergence and precise joint-level control. We evaluate our method against state-of-the-art approaches on the Mixamo dataset, and experimental results demonstrate that it achieves comparable performance while delivering an order-of-magnitude speedup.
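The bone-length-preserving iteration can be pictured as a simple re-projection: after an objective step proposes joint positions, each child joint is pulled back along its bone to the rest length. This is a generic stand-in with hypothetical names, not the paper's solver, which couples this step with its spherical semantic objective.

```python
import numpy as np

def enforce_bone_lengths(positions, parents, rest_lengths, iters=4):
    """Re-project joints so that every bone keeps its rest length.

    positions:    (J, 3) joint positions proposed by the objective step
    parents:      parent joint index per joint (-1 for the root)
    rest_lengths: rest length of the bone from parent to joint j
    """
    pos = positions.copy()
    for _ in range(iters):
        for j, p in enumerate(parents):
            if p < 0:
                continue
            d = pos[j] - pos[p]
            n = np.linalg.norm(d)
            if n > 1e-8:
                pos[j] = pos[p] + d / n * rest_lengths[j]
    return pos

# A 3-joint chain stretched by an optimization step snaps back to unit bones.
parents = np.array([-1, 0, 1])
rest = np.array([0.0, 1.0, 1.0])
stretched = np.array([[0, 0, 0], [0, 1.4, 0], [0, 2.9, 0]], dtype=float)
print(enforce_bone_lengths(stretched, parents, rest))
```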
Citations: 0
Gaussian Integral Linear Operators for Precomputed Graphics
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763321
Haolin Lu, Yash Belhe, Gurprit Singh, Tzu-Mao Li, Toshiya Hachisuka
Integral linear operators play a key role in many graphics problems, but solutions obtained via Monte Carlo methods often suffer from high variance. A common strategy to improve the efficiency of integration across various inputs is to precompute the kernel function. Traditional methods typically rely on basis expansions for both the input and output functions. However, using fixed output bases can restrict the precision of output reconstruction and limit the compactness of the kernel representation. In this work, we introduce a new method that approximates both the kernel and the input function using Gaussian mixtures. This formulation allows the integral operator to be evaluated analytically, leading to improved flexibility in kernel storage and output representation. Moreover, our method naturally supports the sequential application of multiple operators and enables closed-form operator composition, which is particularly beneficial in tasks involving chains of operators. We demonstrate the versatility and effectiveness of our approach across a variety of graphics problems, including environment map relighting, boundary value problems, and fluorescence rendering.
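The analytic evaluation rests on the standard Gaussian product identity ∫ N(x; a, A) N(x; b, B) dx = N(a; b, A + B), so applying a Gaussian-mixture kernel to a Gaussian-mixture input collapses into a double sum of Gaussian evaluations. A minimal 1D sketch with a brute-force check (variable names are ours, not the paper's):

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def mixture_inner_product(w1, mu1, var1, w2, mu2, var2):
    """Closed form of the integral of a product of two Gaussian mixtures."""
    total = 0.0
    for wi, ai, Ai in zip(w1, mu1, var1):
        for wj, bj, Bj in zip(w2, mu2, var2):
            total += wi * wj * gauss(ai, bj, Ai + Bj)  # product identity
    return total

# Brute-force quadrature agrees with the analytic value.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = 0.7 * gauss(x, -1.0, 0.5) + 0.3 * gauss(x, 2.0, 1.5)
k = 1.0 * gauss(x, 0.5, 0.8)
print(np.sum(f * k) * dx)
print(mixture_inner_product([0.7, 0.3], [-1.0, 2.0], [0.5, 1.5],
                            [1.0], [0.5], [0.8]))
```

Because the output of such an operator application is again a Gaussian mixture, chains of operators stay in closed form, which is the composition property the abstract highlights.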
Citations: 0
Glare Pattern Depiction: High-Fidelity Physical Computation and Physiologically-Inspired Visual Response
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763356
Yuxiang Sun, Gladimir V. G. Baranoski
When observing an intense light source, humans perceive dense radiating spikes known as glare/starburst patterns. These patterns are frequently used in computer graphics applications to enhance the perception of brightness (e.g., in games and films). Previous works have computed the physical energy distribution of glare patterns under daytime conditions using approximations like Fresnel diffraction. These techniques are capable of producing visually believable results, particularly when the pupil remains small. However, they are insufficient under nighttime conditions, when the pupil is significantly dilated and the assumptions behind the approximations no longer hold. To address this, we employ the Rayleigh-Sommerfeld diffraction solution, from which Fresnel diffraction is derived as an approximation, as our baseline reference. In pursuit of performance and visual quality, we also employ Ochoa's approximation and the Chirp Z transform to efficiently generate high-resolution results for computer graphics applications. By also taking into account background illumination and certain physiological characteristics of the human photoreceptor cells, particularly the visual threshold of light stimulus, we propose a framework capable of producing plausible visual depictions of glare patterns for both daytime and nighttime scenes.
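For intuition only, a far-field (Fraunhofer) toy model shows why pupil structure produces radiating spikes: the far-field intensity is |FFT(aperture)|², and thin straight occluders across the aperture diffract into perpendicular streaks. The paper itself computes the Rayleigh-Sommerfeld solution (accelerated with Ochoa's approximation and the Chirp Z transform); this sketch only illustrates the phenomenon.

```python
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (np.hypot(x, y) < 40).astype(float)        # circular pupil

# A few thin radial occluders (eyelash-like edges) across the pupil.
theta = np.arctan2(y, x)
for a in np.linspace(0, np.pi, 4, endpoint=False):
    on_line = np.abs(((theta - a + np.pi / 2) % np.pi) - np.pi / 2) < 0.01
    pupil[on_line] = 0.0

far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
intensity = np.abs(far_field) ** 2                 # starburst-like pattern
print(intensity.max() / intensity.mean())          # dominant central peak
```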
Citations: 0
Artifact-Resilient Real-Time Holography
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763361
Victor Chu, Oscar Pueyo-Ciutad, Ethan Tseng, Florian Schiffers, Grace Kuo, Nathan Matsuda, Alberto Redo-Sanchez, Douglas Lanman, Oliver Cossairt, Felix Heide
Holographic near-eye displays promise unparalleled depth cues, high-resolution imagery, and realistic three-dimensional parallax at a compact form factor, making them promising candidates for emerging augmented and virtual reality systems. However, existing holographic display methods often assume ideal viewing conditions and overlook real-world factors such as eye floaters and eyelashes—obstructions that can severely degrade perceived image quality. In this work, we propose a new metric that quantifies hologram resilience to artifacts and apply it to computer generated holography (CGH) optimization. We call this Artifact Resilient Holography (ARH). We begin by introducing a simulation method that models the effects of pre- and post-pupil obstructions on holographic displays. Our analysis reveals that eyebox regions dominated by low frequencies—produced especially by the smooth-phase holograms broadly adopted in recent holography work—are vulnerable to visual degradation from dynamic obstructions such as floaters and eyelashes. In contrast, random phase holograms spread energy more uniformly across the eyebox spectrum, enabling them to diffract around obstructions without producing prominent artifacts. By characterizing a random phase eyebox using the Rayleigh Distribution, we derive a differentiable metric in the eyebox domain. We then apply this metric to train a real-time neural network-based phase generator, enabling it to produce artifact-resilient 3D holograms that preserve visual fidelity across a range of practical viewing conditions—enhancing both robustness and user interactivity.
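The eyebox statistics motivating the metric can be reproduced in a few lines: the Fourier transform of a unit-amplitude random-phase field is approximately complex Gaussian, so its magnitudes are roughly Rayleigh-distributed and energy spreads across the eyebox, whereas a smooth (here, constant) phase concentrates energy at low frequencies. This sketch shows only the statistics, not the paper's differentiable metric or its neural phase generator.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
amplitude = np.ones((N, N))
phases = {"smooth": np.zeros((N, N)),
          "random": rng.uniform(0, 2 * np.pi, (N, N))}

for name, phase in phases.items():
    field = amplitude * np.exp(1j * phase)
    eyebox = np.fft.fftshift(np.fft.fft2(field)) / N
    energy = np.abs(eyebox) ** 2
    c, w = N // 2, N // 20
    central = energy[c - w:c + w, c - w:c + w].sum() / energy.sum()
    print(f"{name}: central patch holds {central:.3f} of the energy")
```

A low-frequency-dominated eyebox (the smooth case) is exactly the configuration the paper finds vulnerable to floaters and eyelashes.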
Citations: 0
Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763330
Tianyu Huang, Wangguandong Zheng, Tengfei Wang, Yuhao Liu, Zhenwei Wang, Junta Wu, Jie Jiang, Hui Li, Rynson Lau, Wangmeng Zuo, Chunchao Guo
Real-world applications like video gaming and virtual reality often demand the ability to model 3D scenes that users can explore along custom camera trajectories. While significant progress has been made in generating 3D objects from text or images, creating long-range, 3D-consistent, explorable 3D scenes remains a complex and challenging problem. In this work, we present Voyager, a novel video diffusion framework that generates world-consistent 3D point-cloud sequences from a single image with a user-defined camera path. Unlike existing approaches, Voyager achieves end-to-end scene generation and reconstruction with inherent consistency across frames, eliminating the need for 3D reconstruction pipelines (e.g., structure-from-motion or multi-view stereo). Our method integrates three key components: 1) World-Consistent Video Diffusion: a unified architecture that jointly generates aligned RGB and depth video sequences, conditioned on existing world observations to ensure global coherence; 2) Long-Range World Exploration: an efficient world cache with point culling and auto-regressive inference with smooth video sampling, enabling iterative scene extension with context-aware consistency; and 3) Scalable Data Engine: a video reconstruction pipeline that automates camera pose estimation and metric depth prediction for arbitrary videos, enabling large-scale, diverse training-data curation without manual 3D annotations. Collectively, these designs yield a clear improvement over existing methods in visual quality and geometric accuracy, with versatile applications. Code for this paper is at https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager.
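A toy version of the world-cache query with point culling mentioned in component 2): cached world points are re-projected into a candidate view, and points behind the camera or outside the image bounds are culled before conditioning the next generation step. Names and the pinhole model are illustrative assumptions.

```python
import numpy as np

def visible_cache_points(points, K, world_to_cam, hw):
    """Cull cached points that a new view cannot see (behind or off-screen)."""
    h, w = hw
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (world_to_cam @ homog.T).T[:, :3]
    in_front = cam[:, 2] > 1e-6
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    in_image = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                (uv[:, 1] >= 0) & (uv[:, 1] < h))
    return points[in_front & in_image]

# One point ahead of the camera survives; one behind and one off-screen don't.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
cache = np.array([[0, 0, 2.0], [0, 0, -2.0], [5.0, 0, 2.0]])
print(visible_cache_points(cache, K, np.eye(4), (128, 128)))
```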
Citations: 0
Auto Hair Card Extraction for Smooth Hair with Differentiable Rendering
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763295
Zhongtian Zheng, Tao Huang, Haozhe Su, Xueqi Ma, Yuefan Shen, Tongtong Wang, Yin Yang, Xifeng Gao, Zherong Pan, Kui Wu
Hair cards remain a widely used representation for hair modeling in real-time applications, offering a practical trade-off between visual fidelity, memory usage, and performance. However, generating high-quality hair card models remains a challenging and labor-intensive task. This work presents an automated pipeline for converting strand-based hair models into hair card models with a limited number of cards and textures while preserving the hairstyle appearance. Our key idea is a novel differentiable representation where each strand is encoded as a projected 2D curve in the texture space, which enables end-to-end optimization with differentiable rendering while respecting the structures of the hair geometry. Based on this representation, we develop a novel algorithm pipeline, where we first cluster hair strands into initial hair cards and project the strands into the texture space. We then conduct a two-stage optimization, where our first stage optimizes the orientation of each hair card separately, and after strand projection, our second stage conducts joint optimization over the entire hair card model for fine-tuning. Our method is evaluated on a range of hairstyles, including straight, wavy, curly, and coily hair. To capture the appearance of short or coily hair, our method comes with support for hair caps and cross-cards.
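The first step, clustering strands into initial cards, might look like the sketch below: each strand polyline is resampled into a fixed-length descriptor and grouped with plain k-means. The paper's actual clustering criterion and the subsequent two-stage differentiable optimization are not reproduced; everything here is an illustrative assumption.

```python
import numpy as np

def strand_descriptor(strand, samples=8):
    """Resample a (P, 3) polyline strand into a fixed-length feature vector."""
    t = np.linspace(0, 1, len(strand))
    ts = np.linspace(0, 1, samples)
    return np.stack([np.interp(ts, t, strand[:, k]) for k in range(3)], 1).ravel()

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means as a stand-in for strand-to-card clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels

# 200 random-walk strands grouped into 8 initial cards.
strands = [np.cumsum(np.random.default_rng(i).normal(0, 0.01, (32, 3)), 0)
           for i in range(200)]
X = np.stack([strand_descriptor(s) for s in strands])
print(np.bincount(kmeans(X, k=8)))
```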
Citations: 0
Resolution Where It Counts: Hash-based GPU-Accelerated 3D Reconstruction via Variance-Adaptive Voxel Grids
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-20 · DOI: 10.1145/3777909
Lorenzo De Rebotti, Emanuele Giacomini, Giorgio Grisetti, Luca Di Giammarino
Efficient and scalable 3D surface reconstruction from range data remains a core challenge in computer graphics and vision, particularly in real-time and resource-constrained scenarios. Traditional volumetric methods based on fixed-resolution voxel grids or hierarchical structures like octrees often suffer from memory inefficiency, computational overhead, and a lack of GPU support. We propose a novel variance-adaptive, multi-resolution voxel grid that dynamically adjusts voxel size based on the local variance of signed distance field (SDF) observations. Unlike prior multi-resolution approaches that rely on recursive octree structures, our method leverages a flat spatial hash table to store all voxel blocks, supporting constant-time access and full GPU parallelism. This design enables high memory efficiency and real-time scalability. We further demonstrate how our representation supports GPU-accelerated rendering through a parallel quad-tree structure for Gaussian Splatting, enabling effective control over splat density. Our open-source CUDA/C++ implementation achieves up to 13× speedup and 4× lower memory usage compared to fixed-resolution baselines, while maintaining on-par reconstruction accuracy, offering a practical and extensible solution for high-performance 3D reconstruction.
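A CPU sketch of the flat-hash, variance-adaptive idea: voxel blocks live in a single hash map keyed by integer block coordinates (constant-time access, no tree traversal), each voxel accumulates Welford-style running statistics of its SDF observations, and high variance flags a voxel for refinement. Block size, field layout, and the split threshold are illustrative, not the paper's values.

```python
import numpy as np

BLOCK = 8          # voxels per block side
VOXEL = 0.05       # coarse voxel size (m)
SPLIT_VAR = 0.01   # variance above which a voxel would be subdivided

blocks = {}        # (bx, by, bz) -> (BLOCK, BLOCK, BLOCK, 3) of [n, mean, M2]

def integrate(point, sdf):
    v = np.floor(point / VOXEL).astype(int)
    bkey, local = tuple(v // BLOCK), tuple(v % BLOCK)
    if bkey not in blocks:
        blocks[bkey] = np.zeros((BLOCK, BLOCK, BLOCK, 3))
    stats = blocks[bkey][local]            # view: [count, mean, M2]
    stats[0] += 1
    delta = sdf - stats[1]
    stats[1] += delta / stats[0]           # Welford running mean
    stats[2] += delta * (sdf - stats[1])   # Welford running M2

def needs_refinement(bkey, local):
    n, _, m2 = blocks[bkey][local]
    return n > 1 and m2 / (n - 1) > SPLIT_VAR

for z in [0.00, 0.01, 0.30, -0.28]:        # noisy SDF samples for one voxel
    integrate(np.array([0.12, 0.07, 0.33]), z)
print(needs_refinement((0, 0, 0), (2, 1, 6)))  # True: variance is high here
```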
Citations: 0
Voronoi Rooms: Dynamic Visibility Modulation of Overlapping Spaces for Telepresence
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-20 · DOI: 10.1145/3777900
Taehei Kim, Jihun Shin, Hyeshim Kim, Hyuckjin Jang, Jiho Kang, Sung-Hee Lee
We propose a multi-user Mixed Reality (MR) telepresence system that allows users to interact by seamlessly visualizing remote environments and avatars overlaid onto their local physical space. Building on prior shared-space approaches, our method first aligns overlapping rooms to maximize a shared space—a common area containing matched real and virtual objects where all users can interact. Uniquely, our system extends beyond this shared space by visualizing non-shared spaces, the remaining part of each room, allowing users to inhabit these distinct areas. To address the issue of overlap between non-shared spaces, we dynamically adjust their visibility based on user proximity, using a Voronoi diagram to prioritize subspaces closer to each user. Visualizing the surrounding space of each user conveys spatial context, helping others interpret their behavior within their environment. Visibility is updated in real time as users move, maintaining a coherent sense of spatial awareness. Through a user study, we demonstrate that our system enhances enjoyment, spatial understanding, and presence compared to shared-space-only approaches. Quantitative results further show that our dynamic visibility modulation improves both personal space preservation and space accessibility relative to static methods. Overall, our system provides users with a seamless, dynamically connected, and shared multi-room environment.
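The proximity rule reduces to a Voronoi partition: each cell of non-shared space is assigned to (and rendered for) the user whose position is nearest. A minimal sketch with hypothetical names:

```python
import numpy as np

def voronoi_owner(cell_centers, user_positions):
    """Assign each subspace cell to its nearest user (a Voronoi partition)."""
    d = np.linalg.norm(cell_centers[:, None, :] - user_positions[None], axis=2)
    return d.argmin(axis=1)

# A 5x5 floor grid split between two users in opposite corners.
xs, ys = np.meshgrid(np.linspace(0, 4, 5), np.linspace(0, 4, 5))
cells = np.stack([xs.ravel(), ys.ravel()], axis=1)
users = np.array([[0.5, 0.5], [3.5, 3.5]])
print(voronoi_owner(cells, users).reshape(5, 5))
```

In the full system this ownership field is recomputed as users move, which is what keeps the visibility modulation dynamic.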
Citations: 0