
ACM Transactions on Graphics: Latest Publications

Generative Head-Mounted Camera Captures for Photorealistic Avatars
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763300
Shaojie Bai, Seunghyeon Seo, Yida Wang, Chenghui Li, Owen Wang, Te-Li Wang, Tianyang Ma, Jason Saragih, Shih-En Wei, Nojun Kwak, Hyung Jun (John) Kim
Enabling photorealistic avatar animations in virtual and augmented reality (VR/AR) has been challenging because of the difficulty of obtaining the ground-truth state of faces. It is physically impossible to obtain synchronized images from head-mounted camera (HMC) sensing input, which provides partial observations in infrared (IR), and from an array of outside-in dome cameras, which provide full observations that match the avatars' appearance. Prior works relying on analysis-by-synthesis methods can generate accurate ground truth, but suffer from imperfect disentanglement between expression and style in their personalized training. The reliance on extensive paired captures (HMC and dome) of the same subject makes it operationally expensive to collect large-scale datasets, which moreover cannot be reused for different HMC viewpoints and lighting. In this work, we propose a novel generative approach, Generative HMC (GenHMC), that leverages large unpaired HMC captures, which are much easier to collect, to directly generate high-quality synthetic HMC images given any conditioning avatar state from dome captures. We show that our method properly disentangles the input conditioning signal, which specifies facial expression and viewpoint, from facial appearance, leading to more accurate ground truth. Furthermore, our method generalizes to unseen identities, removing the reliance on paired captures. We demonstrate these breakthroughs by evaluating both synthetic HMC images and universal face encoders trained from these new HMC-avatar correspondences, which achieve better data efficiency and state-of-the-art accuracy.
Citations: 0
Fire-X: Extinguishing Fire with Stoichiometric Heat Release
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763338
Helge Wrede, Anton Wagner, Sarker Miraz Mahfuz, Wojtek Palubicki, Dominik Michels, Sören Pirk
We present a novel combustion simulation framework to model fire phenomena across solids, liquids, and gases. Our approach extends traditional fluid solvers by incorporating multi-species thermodynamics and reactive transport for fuel, oxygen, nitrogen, carbon dioxide, water vapor, and residuals. Combustion reactions are governed by stoichiometry-dependent heat release, allowing an accurate simulation of premixed and diffusive flames with varying intensity and composition. We support a wide range of scenarios including jet fires, water suppression (sprays and sprinklers), fuel evaporation, and starvation conditions. Our framework enables interactive heat sources, fire detectors, and realistic rendering of flames (e.g., laminar-to-turbulent transitions and blue-to-orange color shifts). Our key contributions include the tight coupling of species dynamics with thermodynamic feedback, evaporation modeling, and a hybrid SPH-grid representation for the efficient simulation of extinguishing fires. We validate our method through numerous experiments that demonstrate its versatility in both indoor and outdoor fire scenarios.
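The stoichiometry-dependent heat release described in the abstract can be illustrated with a minimal sketch of a single-step fuel/oxidizer reaction: the fuel burned in each cell per step is capped by the limiting reactant at the stoichiometric mixing ratio, and the released heat is proportional to the burned fuel mass. This is not the paper's solver; the ratio, rate constant, and heating value below are illustrative assumptions.

```python
import numpy as np

def stoichiometric_heat_release(fuel, oxygen, dt,
                                s=4.0,               # stoichiometric oxygen-to-fuel mass ratio (assumed)
                                rate=5.0,            # reaction rate constant [1/s] (assumed)
                                heating_value=5e7):  # heat released per kg of burned fuel [J/kg] (assumed)
    """Single-step combustion update for per-cell fuel/oxygen mass fields.

    The burned fuel is limited by whichever reactant runs out first at the
    stoichiometric ratio s; returns updated fields and the heat released.
    """
    desired = rate * fuel * dt                  # fuel the reaction "wants" to consume this step
    limit = np.minimum(fuel, oxygen / s)        # fuel that *can* burn given the oxygen on hand
    burned = np.minimum(desired, limit)

    fuel = fuel - burned
    oxygen = oxygen - s * burned
    heat = heating_value * burned               # stoichiometry-dependent heat release
    return fuel, oxygen, heat

# Toy example: a well-oxygenated cell and an oxygen-starved cell (little heat released).
fuel = np.array([0.10, 0.10])
oxygen = np.array([0.80, 0.05])
print(stoichiometric_heat_release(fuel, oxygen, dt=0.1))
```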
Citations: 0
Split4D: Decomposed 4D Scene Reconstruction Without Video Segmentation
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763343
Yongzhen Hu, Yihui Yang, Haotong Lin, Yifan Wang, Junting Dong, Yifu Deng, Xinyu Zhu, Fan Jia, Hujun Bao, Xiaowei Zhou, Sida Peng
This paper addresses the problem of decomposed 4D scene reconstruction from multi-view videos. Recent methods achieve this by lifting video segmentation results to a 4D representation through differentiable rendering techniques. As a result, they rely heavily on the quality of video segmentation maps, which are often unstable, leading to unreliable reconstruction results. To overcome this challenge, our key idea is to represent the decomposed 4D scene with the Freetime FeatureGS and to design a streaming feature learning strategy that accurately recovers it from per-image segmentation maps, eliminating the need for video segmentation. Freetime FeatureGS models the dynamic scene as a set of Gaussian primitives with learnable features and linear motion ability, allowing them to move to neighboring regions over time. We apply a contrastive loss to Freetime FeatureGS, forcing primitive features to be close or far apart based on whether their projections belong to the same instance in the 2D segmentation map. As our Gaussian primitives can move across time, this naturally extends feature learning to the temporal dimension, achieving 4D segmentation. Furthermore, we sample observations for training in a temporally ordered manner, enabling the streaming propagation of features over time and effectively avoiding local minima during the optimization process. Experimental results on several datasets show that the reconstruction quality of our method outperforms recent methods by a large margin.
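The per-image contrastive objective on primitive features can be sketched as a generic pairwise contrastive loss: features of primitives whose projections land in the same 2D instance are pulled together, all other pairs are pushed at least a margin apart. This is only a sketch under that assumption, not the paper's exact formulation; the margin value is arbitrary.

```python
import numpy as np

def pairwise_contrastive_loss(features, instance_ids, margin=1.0):
    """features:     (N, D) per-Gaussian feature vectors
    instance_ids: (N,)   2D-instance label of the pixel each primitive projects to
    margin:       minimum desired distance between features of different instances
    """
    diff = features[:, None, :] - features[None, :, :]        # (N, N, D) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)                      # (N, N) pairwise distances
    same = instance_ids[:, None] == instance_ids[None, :]     # (N, N) same-instance mask
    off_diag = ~np.eye(len(features), dtype=bool)

    pull = dist[same & off_diag] ** 2                          # same instance: small distance
    push = np.maximum(0.0, margin - dist[~same]) ** 2          # different: at least `margin` apart
    return pull.mean() + push.mean()

# Toy example: 4 primitives belonging to two instances.
feats = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
ids = np.array([0, 0, 1, 1])
print(pairwise_contrastive_loss(feats, ids))
```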
Citations: 0
Scattering-Aware Color Calibration for 3D Printers Using a Simple Calibration Target
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763293
Tomáš Iser, Tobias Rittig, Alexander Wilkie
We present a novel method for accurately calibrating the optical properties of full-color 3D printers using only a single, directly printable calibration target. Our approach is based on accurate multiple-scattering light transport and estimates the single-scattering albedo and extinction coefficient for each resin. These parameters are essential for both soft-proof rendering of 3D printouts and for advanced, scattering-aware 3D halftoning algorithms. In contrast to previous methods that rely on thin, precisely fabricated resin samples and labor-intensive manual processing, our technique achieves higher accuracy with significantly less effort. Our calibration target is specifically designed to enable algorithmic recovery of each resin's optical properties through a series of one-dimensional and two-dimensional numerical optimizations, applied first on the white and black resins, and then on any remaining resins. The method supports both RGB and spectral calibration, depending on whether a camera or spectrometer is used to capture the calibration target. It also scales linearly with the number of resins, making it well-suited for modern multi-material printers. We validate our approach extensively, first on synthetic and then on real resins across 242 color mixtures, printed thin translucent samples, printed surface textures, and fully textured 3D models with complex geometry, including an eye model and a figurine.
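As a rough illustration of what recovering scattering parameters from a printed target means, the sketch below inverts a simple two-flux (Kubelka-Munk) model from the measured reflectance of an optically thick patch. The paper instead fits an accurate multiple-scattering model against a purpose-built target; this closed-form stand-in only conveys the general parameter-estimation idea, and the measured value is made up.

```python
def km_absorption_to_scattering_ratio(r_inf):
    """Kubelka-Munk relation for an optically thick layer:
    K/S = (1 - R_inf)^2 / (2 * R_inf)."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def two_flux_albedo(r_inf):
    """Two-flux 'albedo' S / (K + S) implied by the measured reflectance
    (an analogue of the single-scattering albedo under this simplified model)."""
    k_over_s = km_absorption_to_scattering_ratio(r_inf)
    return 1.0 / (1.0 + k_over_s)

# Hypothetical measured reflectance of a thick resin patch at one wavelength.
r_measured = 0.45
print(two_flux_albedo(r_measured))
```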
Citations: 0
Waste-to-Value: Reutilized Material Maximization for Additive and Subtractive Hybrid Remanufacturing
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763313
Fanchao Zhong, Zhenmin Zhang, Liyuan Wang, Xin Yan, Jikai Liu, Lin Lu, Haisen Zhao
Remanufacturing effectively extends component lifespans by restoring used or end-of-life parts to like-new or even superior condition, with an emphasis on maximizing reutilized material, especially for high-cost materials. Hybrid manufacturing technology combines the capabilities of additive and subtractive manufacturing; its ability to both add and remove material enables the remanufacturing of complex shapes, and it is increasingly being applied in remanufacturing. How to effectively plan the additive and subtractive hybrid remanufacturing (ASHRM) process so as to maximize material reutilization has therefore become a key focus of attention. However, current ASHRM process planning methods lack strict consideration of collision-free constraints, hindering practical application. This paper introduces a computational framework that tackles ASHRM process planning for general shapes while strictly considering these constraints. We separate global and local collision-free constraints, employing clipping planes and a graph formulation to tackle them, respectively, ultimately maximizing the reutilized volume while ensuring these constraints are satisfied. Additionally, we optimize the setup of the target model so as to further maximize the reutilized volume. Extensive experiments and physical validations on a 5-axis hybrid manufacturing platform demonstrate the effectiveness of our method across various 3D shapes, achieving an average material reutilization of 69% across 12 cases. Code is publicly available at https://github.com/fanchao98/Waste-to-Value.
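To make the quantity being maximized concrete, the snippet below computes a reutilized-volume figure on boolean voxel grids: material of the worn part that also lies inside the target shape can in principle be kept, the rest must be machined away or deposited. This is only an illustration with made-up grids; the paper's planner additionally enforces the collision-free constraints, which this sketch ignores.

```python
import numpy as np

def reutilization_stats(old_part, target, voxel_volume=1.0):
    """old_part, target: boolean occupancy grids of the worn part and the target shape.
    Returns (reutilized volume, reuse fraction of the target, removed volume, added volume)."""
    reused = old_part & target          # material kept from the old part
    removed = old_part & ~target        # material to machine away (subtractive)
    added = target & ~old_part          # material to deposit (additive)
    reuse_fraction = reused.sum() / max(target.sum(), 1)
    return (reused.sum() * voxel_volume, reuse_fraction,
            removed.sum() * voxel_volume, added.sum() * voxel_volume)

# Toy 3D grids: a worn cube and a slightly shifted target cube.
old_part = np.zeros((10, 10, 10), dtype=bool); old_part[1:8, 1:8, 1:8] = True
target   = np.zeros((10, 10, 10), dtype=bool); target[2:9, 2:9, 2:9] = True
print(reutilization_stats(old_part, target))
```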
Citations: 0
STGlight: Online Indoor Lighting Estimation via Spatio-Temporal Gaussian Fusion
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763350
Shiyuan Shen, Zhongyun Bao, Hong Ding, Wenju Xu, Tenghui Lai, Chunxia Xiao
Estimating lighting in indoor scenes is particularly challenging due to the diverse distribution of light sources and the complexity of scene geometry. Previous methods have mainly focused on spatial variability and consistency for a single image, or on temporal consistency for video sequences. However, these approaches fail to achieve spatio-temporal consistency in video lighting estimation, which restricts applications such as compositing animated models into videos. In this paper, we propose STGlight, a lightweight and effective method for spatio-temporally consistent video lighting estimation: our network processes a stream of LDR RGB-D video frames while maintaining incrementally updated global representations of both geometry and lighting, enabling the prediction of HDR environment maps at arbitrary locations for each frame. We model indoor lighting with three components: visible light sources providing direct illumination, ambient lighting approximating indirect illumination, and local environment textures producing high-quality specular reflections on glossy objects. To capture spatially varying lighting, we represent scene geometry with point clouds, which support efficient spatio-temporal fusion and allow us to handle moderately dynamic scenes. To ensure temporal consistency, we apply a transformer-based fusion block that propagates lighting features across frames. Building on this, we further handle dynamic lighting with moving objects or changing light conditions by applying intrinsic decomposition on the point cloud and integrating the decomposed components with a neural fusion module. Experiments show that our online method can effectively predict lighting for any position within the video stream, while maintaining spatial variability and spatio-temporal consistency. Code is available at: https://github.com/nauyihsnehs/STGlight.
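A toy illustration of the three-component decomposition: an environment map at a query position is assembled from an ambient term plus point light sources splatted into the pixels whose directions point toward them with inverse-square falloff (visibility and the paper's local specular textures are omitted). The resolution, light positions, and intensities below are made up.

```python
import numpy as np

def environment_map(query_pos, lights, ambient, height=16, width=32):
    """Assemble a toy HDR equirectangular environment map at `query_pos`.
    lights: list of (position (3,), rgb intensity (3,)); ambient: rgb (3,)."""
    env = np.tile(np.asarray(ambient, dtype=float), (height, width, 1))
    for pos, intensity in lights:
        d = np.asarray(pos, dtype=float) - query_pos
        dist = np.linalg.norm(d)
        d /= dist
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))        # polar angle from +z
        phi = np.arctan2(d[1], d[0]) % (2 * np.pi)         # azimuth in [0, 2*pi)
        row = min(int(theta / np.pi * height), height - 1)
        col = min(int(phi / (2 * np.pi) * width), width - 1)
        env[row, col] += np.asarray(intensity, dtype=float) / dist ** 2   # inverse-square falloff
    return env

env = environment_map(np.array([0.0, 0.0, 1.0]),
                      lights=[(np.array([2.0, 0.0, 2.5]), np.array([50.0, 45.0, 40.0]))],
                      ambient=[0.2, 0.2, 0.25])
print(env.shape, env.max())
```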
Citations: 0
A Highly-Efficient Hybrid Simulation System for Flight Controller Design and Evaluation of Unmanned Aerial Vehicles
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763283
Jiwei Wang, Wenbin Song, Yicheng Fan, Yang Wang, Xiaopei Liu
Unmanned aerial vehicles (UAVs) have demonstrated remarkable efficacy across diverse fields. Nevertheless, developing flight controllers tailored to a specific UAV design, particularly in environments with strong fluid-interactive dynamics, remains challenging. Conventional controller-design experience often falls short in such cases, rendering it infeasible to apply time-tested practices. Consequently, a simulation test bed becomes indispensable for controller design and evaluation prior to actual implementation on the physical UAV. Such a platform should allow meticulous adjustment of controllers and should transfer to real-world systems without significant performance degradation. Existing simulators predominantly hinge on empirical models for the sake of efficiency, often overlooking the dynamic interplay between the UAV and the surrounding airflow. This makes it difficult to mimic more complex flight maneuvers, such as an abrupt midair halt inside a narrow channel, in which the UAV may experience strong fluid-structure interactions. On the other hand, simulators that do consider the complex surrounding airflow are extremely slow and inadequate to support the design and evaluation of flight controllers. In this paper, we present a novel remedy for highly efficient UAV flight simulation, which entails a hybrid modeling approach that deftly combines our novel far-field adaptive block-based fluid simulator with parametric empirical models situated near the boundary of the UAV, with the model parameters automatically calibrated. With this newly devised simulator, a broader spectrum of flight scenarios can be explored for controller design and assessment, encompassing those influenced by potent close-proximity effects, or situations where multiple UAVs operate in close quarters. The practical worth of our simulator has been validated through comparisons with actual UAV flight data. We further showcase its utility in designing flight controllers for fixed-wing, multi-rotor, and hybrid UAVs, and even exemplify its application when multiple UAVs are involved, underlining the unique value of our system for flight controller design.
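The hybrid idea of a parametric near-body model fed by a simulated far-field flow can be sketched generically: rotor thrust follows a standard quadratic empirical model, while aerodynamic drag is computed against the airflow velocity sampled from the surrounding fluid solver. This is not the paper's model; all coefficients below are assumptions.

```python
import numpy as np

def body_forces(rotor_speeds, body_velocity, local_airflow,
                k_t=1.5e-6,   # rotor thrust coefficient [N s^2] (assumed)
                rho=1.225,    # air density [kg/m^3]
                c_d=1.0,      # drag coefficient (assumed)
                area=0.05):   # reference area [m^2] (assumed)
    """Toy near-body force model: parametric rotor thrust (k_t * omega^2 per rotor,
    along body z) plus quadratic drag against the sampled far-field airflow."""
    thrust = np.array([0.0, 0.0, k_t * np.sum(np.square(rotor_speeds))])
    v_rel = body_velocity - local_airflow          # velocity relative to the surrounding air
    speed = np.linalg.norm(v_rel)
    drag = -0.5 * rho * c_d * area * speed * v_rel
    return thrust + drag

print(body_forces(np.array([800.0, 800.0, 800.0, 800.0]),
                  body_velocity=np.array([2.0, 0.0, 0.0]),
                  local_airflow=np.array([0.5, 0.0, 0.0])))
```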
Citations: 0
Ultrafast and Controllable Online Motion Retargeting for Game Scenarios
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763351
Tianze Guo, Zhedong Chen, Yi Jiang, Linjun Wu, Xilei Wei, Lang Xu, Yeshuang Lin, He Wang, Xiaogang Jin
Geometry-aware online motion retargeting is crucial for real-time character animation in gaming and virtual reality. However, existing methods often rely on complex optimization procedures or deep neural networks, which constrain their applicability in real-time scenarios. Moreover, they offer limited control over the fine-grained motion details involved in character interactions, resulting in less realistic outcomes. To overcome these limitations, we propose a novel optimization framework for ultrafast, lightweight motion retargeting with joint-level control (i.e., control over joint positions, bone orientations, etc.). Our approach introduces a semantic-aware objective grounded in a spherical geometry representation, coupled with a bone-length-preserving algorithm that iteratively solves this objective. This formulation preserves spatial relationships among spheres, thereby maintaining motion semantics, mitigating interpenetration, and ensuring contact. It is lightweight and computationally efficient, making it particularly suitable for time-critical real-time deployment scenarios. Additionally, we incorporate a heuristic optimization strategy that enables rapid convergence and precise joint-level control. We evaluate our method against state-of-the-art approaches on the Mixamo dataset, and experimental results demonstrate that it achieves comparable performance while delivering an order-of-magnitude speedup.
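The bone-length-preserving step can be illustrated in isolation: after an optimization step moves joints toward their targets, each bone is re-projected to its rest length by sweeping from the root to the leaves. This is a generic constraint-projection pass over a skeleton whose parents are listed before their children, not the paper's full solver.

```python
import numpy as np

def enforce_bone_lengths(joints, parents, rest_lengths):
    """Project joint positions so every bone keeps its rest length.

    joints:       (N, 3) current joint positions, root first
    parents:      (N,)   parent index per joint, -1 for the root (parents precede children)
    rest_lengths: (N,)   rest length of the bone from parents[i] to i (ignored for the root)
    """
    joints = joints.copy()
    for i in range(len(joints)):
        p = parents[i]
        if p < 0:
            continue                                  # root joint: nothing to project
        bone = joints[i] - joints[p]
        norm = np.linalg.norm(bone)
        if norm > 1e-8:
            joints[i] = joints[p] + bone * (rest_lengths[i] / norm)
    return joints

# Toy 3-joint chain stretched by a retargeting step; both bone lengths are restored to 1.
joints = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.2, 0.0, 0.0]])
parents = np.array([-1, 0, 1])
rest = np.array([0.0, 1.0, 1.0])
print(enforce_bone_lengths(joints, parents, rest))
```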
Citations: 0
Gaussian Integral Linear Operators for Precomputed Graphics
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763321
Haolin Lu, Yash Belhe, Gurprit Singh, Tzu-Mao Li, Toshiya Hachisuka
Integral linear operators play a key role in many graphics problems, but solutions obtained via Monte Carlo methods often suffer from high variance. A common strategy to improve the efficiency of integration across various inputs is to precompute the kernel function. Traditional methods typically rely on basis expansions for both the input and output functions. However, using fixed output bases can restrict the precision of output reconstruction and limit the compactness of the kernel representation. In this work, we introduce a new method that approximates both the kernel and the input function using Gaussian mixtures. This formulation allows the integral operator to be evaluated analytically, leading to improved flexibility in kernel storage and output representation. Moreover, our method naturally supports the sequential application of multiple operators and enables closed-form operator composition, which is particularly beneficial in tasks involving chains of operators. We demonstrate the versatility and effectiveness of our approach across a variety of graphics problems, including environment map relighting, boundary value problems, and fluorescence rendering.
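The analytic evaluation the abstract refers to rests on the fact that the integral of a product of two Gaussians has a closed form: ∫ N(x; μ_a, Σ_a) N(x; μ_b, Σ_b) dx = N(μ_a; μ_b, Σ_a + Σ_b). A minimal sketch under the assumption that both the kernel slice and the input are mixtures of isotropic Gaussians is shown below; it is not the paper's implementation.

```python
import numpy as np

def gauss(x, mu, var):
    """Isotropic d-dimensional Gaussian density N(x; mu, var * I)."""
    d = len(mu)
    r2 = np.sum((np.asarray(x) - np.asarray(mu)) ** 2)
    return np.exp(-0.5 * r2 / var) / (2.0 * np.pi * var) ** (d / 2.0)

def apply_operator(kernel_slice, input_mixture):
    """Evaluate (K f)(y) = ∫ K(y, x) f(x) dx for a fixed output location y,
    where K(y, ·) and f are both mixtures of isotropic Gaussians in x.

    kernel_slice:  list of (weight, mean, variance) describing K(y, ·)
    input_mixture: list of (weight, mean, variance) describing f
    Uses ∫ N(x; a, va·I) N(x; b, vb·I) dx = N(a; b, (va + vb)·I).
    """
    total = 0.0
    for wk, mk, vk in kernel_slice:
        for wf, mf, vf in input_mixture:
            total += wk * wf * gauss(mk, mf, vk + vf)
    return total

# Toy 2D example: one-component kernel slice, two-component input mixture.
kernel_slice = [(1.0, [0.0, 0.0], 0.5)]
input_mixture = [(0.7, [0.2, -0.1], 0.3), (0.3, [1.0, 1.0], 0.2)]
print(apply_operator(kernel_slice, input_mixture))
```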
Citations: 0
Glare Pattern Depiction: High-Fidelity Physical Computation and Physiologically-Inspired Visual Response
IF 6.2 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763356
Yuxiang Sun, Gladimir V. G. Baranoski
When observing an intense light source, humans perceive dense radiating spikes known as glare/starburst patterns. These patterns are frequently used in computer graphics applications to enhance the perception of brightness (e.g., in games and films). Previous works have computed the physical energy distribution of glare patterns under daytime conditions using approximations like Fresnel diffraction. These techniques are capable of producing visually believable results, particularly when the pupil remains small. However, they are insufficient under nighttime conditions, when the pupil is significantly dilated and the assumptions behind the approximations no longer hold. To address this, we employ the Rayleigh-Sommerfeld diffraction solution, from which Fresnel diffraction is derived as an approximation, as our baseline reference. In pursuit of performance and visual quality, we also employ Ochoa's approximation and the Chirp Z transform to efficiently generate high-resolution results for computer graphics applications. By also taking into account background illumination and certain physiological characteristics of the human photoreceptor cells, particularly the visual threshold of light stimulus, we propose a framework capable of producing plausible visual depictions of glare patterns for both daytime and nighttime scenes.
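For reference, the first Rayleigh-Sommerfeld diffraction integral, which the paper adopts as its baseline, propagates the field U over an aperture Σ to an observation point P as below, with wavelength λ, wavenumber k = 2π/λ, r the distance from the aperture point Q to P, and θ the angle between the line QP and the aperture normal; Fresnel diffraction follows from it under the paraxial approximation.

```latex
U(P) \;=\; \frac{1}{i\lambda} \iint_{\Sigma} U(Q)\, \frac{e^{ikr}}{r}\, \cos\theta \; \mathrm{d}S
```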
Citations: 0