
ACM Transactions on Graphics: Latest Publications

CrossGen: Learning and Generating Cross Fields for Quad Meshing
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763299
Qiujie Dong, Jiepeng Wang, Rui Xu, Cheng Lin, Yuan Liu, Shiqing Xin, Zichun Zhong, Xin Li, Changhe Tu, Taku Komura, Leif Kobbelt, Scott Schaefer, Wenping Wang
Cross fields play a critical role in various geometry processing tasks, especially for quad mesh generation. Existing methods for cross field generation often struggle to balance computational efficiency with generation quality, relying on slow per-shape optimization. We introduce CrossGen, a novel framework that supports both feed-forward prediction and latent generative modeling of cross fields for quad meshing by unifying geometry and cross field representations within a joint latent space. Our method enables extremely fast computation of high-quality cross fields of general input shapes, typically within one second without per-shape optimization. Our method assumes a point-sampled surface, also called a point-cloud surface, as input, so we can accommodate various surface representations by a straightforward point sampling process. Using an auto-encoder network architecture, we encode input point-cloud surfaces into a sparse voxel grid with fine-grained latent spaces, which are decoded into both SDF-based surface geometry and cross fields (see the teaser figure). We also contribute a dataset of models with both high-quality signed distance field (SDF) representations and their corresponding cross fields, and use it to train our network. Once trained, the network is capable of computing a cross field of an input surface in a feed-forward manner, ensuring high geometric fidelity, noise resilience, and rapid inference. Furthermore, leveraging the same unified latent representation, we incorporate a diffusion model for computing cross fields of new shapes generated from partial input, such as sketches. To demonstrate its practical applications, we validate CrossGen on the quad mesh generation task for a large variety of surface shapes. Experimental results demonstrate that CrossGen generalizes well across diverse shapes and consistently yields high-fidelity cross fields, thus facilitating the generation of high-quality quad meshes.
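For readers unfamiliar with the object being predicted: a cross field assigns to each surface point four tangent directions that are invariant under 90° rotation (a 4-RoSy field), which is what makes it a natural guide for quad meshing. A minimal numpy sketch of that standard representation (not of CrossGen's network), where the angle-based code (cos 4θ, sin 4θ) respects the quarter-turn symmetry:

```python
import numpy as np

def encode_cross(theta):
    """Encode a cross (four directions with 90-degree symmetry) as the
    2-vector (cos 4*theta, sin 4*theta); theta and theta + k*pi/2 map
    to the same code, matching the 4-RoSy symmetry."""
    return np.array([np.cos(4.0 * theta), np.sin(4.0 * theta)])

def decode_cross(code, t1, t2):
    """Recover the four tangent directions of the cross in the tangent
    plane spanned by the orthonormal vectors t1 and t2."""
    theta = np.arctan2(code[1], code[0]) / 4.0
    angles = theta + np.arange(4) * np.pi / 2.0
    return [np.cos(a) * t1 + np.sin(a) * t2 for a in angles]

# Angles 10 and 100 degrees differ by a quarter turn, so they encode
# the same cross.
t1, t2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
assert np.allclose(encode_cross(np.radians(10)), encode_cross(np.radians(100)))
for d in decode_cross(encode_cross(np.radians(10)), t1, t2):
    print(np.round(d, 3))
```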
Citations: 0
Force-Dual Modes: Subspace Design from Stochastic Forces
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763310
Otman Benchekroun, Eitan Grinspun, Maurizio Chiaramonte, Philip Allen Etter
Designing subspaces for Reduced Order Modeling (ROM) is crucial for accelerating finite element simulations in graphics and engineering. Unfortunately, it is not always clear which subspace is optimal for arbitrary dynamic simulation. We propose to construct simulation subspaces from force distributions, allowing us to tailor such subspaces to common scene interactions involving constraint penalties, handle-based control, contact, and musculoskeletal actuation. To achieve this, we adopt a statistical perspective on Reduced Order Modeling, which allows us to push such user-designed force distributions through a linearized simulation to obtain a dual distribution on displacements. To construct our subspace, we then fit a low-rank Gaussian model to this displacement distribution, which we show generalizes Linear Modal Analysis subspaces for uncorrelated unit-variance force distributions, as well as Green's Function subspaces for low-rank force distributions. We show our framework allows for the construction of subspaces that are optimal both with respect to physical material properties and with respect to arbitrary force distributions as observed in handle-based, contact, and musculoskeletal scene interactions.
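The statistical construction sketched in the abstract has a compact linear-algebra reading: if forces f are drawn from a zero-mean Gaussian with covariance Σ_f and displacements solve K u = f for a stiffness matrix K, then u is Gaussian with covariance K⁻¹ Σ_f K⁻ᵀ, and fitting a low-rank Gaussian amounts to keeping the top eigenvectors of that covariance. A small numpy sketch under those assumptions, with a random SPD matrix standing in for an assembled finite element stiffness:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank = 50, 6

# Stand-in SPD "stiffness" matrix; a real pipeline would assemble K
# from a finite element discretization.
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)

# User-designed force distribution: here, forces concentrated on a few
# "handle" degrees of freedom (low-rank covariance).
B = np.zeros((n, 3))
B[:5, 0] = B[10:15, 1] = B[30:35, 2] = 1.0
Sigma_f = B @ B.T + 1e-6 * np.eye(n)

# Dual distribution on displacements: u = K^{-1} f  implies
# Sigma_u = K^{-1} Sigma_f K^{-T}.
Kinv = np.linalg.inv(K)
Sigma_u = Kinv @ Sigma_f @ Kinv.T

# Low-rank Gaussian fit: top eigenvectors of Sigma_u give the subspace.
evals, evecs = np.linalg.eigh(Sigma_u)
U = evecs[:, -rank:]          # n x rank reduced basis
print("captured variance:", evals[-rank:].sum() / evals.sum())
```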
Citations: 0
SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763309
Sanjeev Muralikrishnan, Niladri Shekhar Dutt, Niloy J. Mitra
Animation retargeting applies a sparse motion description (e.g., keypoint sequences) to a character mesh to produce a semantically plausible and temporally coherent full-body mesh sequence. Existing approaches come with restrictions: they require access to template-based shape priors or artist-designed deformation rigs, suffer from limited generalization to unseen motion and/or shapes, or exhibit motion jitter. We propose Self-supervised Motion Fields (SMF), a self-supervised framework that is trained with only sparse motion representations, without requiring dataset-specific annotations, templates, or rigs. At the heart of our method are Kinetic Codes, a novel autoencoder-based sparse motion encoding that exposes a semantically rich latent space, simplifying large-scale training. Our architecture comprises dedicated spatial and temporal gradient predictors, which are jointly trained in an end-to-end fashion. The combined network, regularized by the Kinetic Codes' latent space, generalizes well across both unseen shapes and new motions. We evaluated our method on unseen motion sampled from AMASS, D4D, Mixamo, and raw monocular video for animation transfer on various characters with varying shapes and topology. We report a new SoTA on the AMASS dataset in the context of generalization to unseen motion.
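As a rough, hypothetical illustration of the autoencoder ingredient (layer sizes, code dimension, and the training loop below are invented for the sketch and are not taken from the paper), a sparse motion clip of K keypoints over T frames can be flattened and compressed into a low-dimensional code with a small PyTorch autoencoder:

```python
import torch
import torch.nn as nn

T, K, CODE = 16, 24, 32          # frames, keypoints, code size (assumed)
DIM = T * K * 3                  # flattened xyz keypoint sequence

class KineticAutoencoder(nn.Module):
    """Toy autoencoder compressing a sparse motion clip into a code."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(),
                                 nn.Linear(256, CODE))
        self.dec = nn.Sequential(nn.Linear(CODE, 256), nn.ReLU(),
                                 nn.Linear(256, DIM))

    def forward(self, x):
        code = self.enc(x)        # latent "kinetic code" of the clip
        return self.dec(code), code

model = KineticAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clips = torch.randn(8, DIM)       # random stand-in motion clips
for _ in range(5):                # a few self-supervised steps
    recon, _ = model(clips)
    loss = nn.functional.mse_loss(recon, clips)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("reconstruction loss:", float(loss))
```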
Citations: 0
MALeR: Improving Compositional Fidelity in Layout-Guided Generation
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763341
Shivank Saxena, Dhruv Srivastava, Makarand Tapaswi
Recent advances in text-to-image models have enabled a new era of creative and controllable image generation. However, generating compositional scenes with multiple subjects and attributes remains a significant challenge. To enhance user control over subject placement, several layout-guided methods have been proposed. However, these methods face numerous challenges, particularly in compositional scenes. Unintended subjects often appear outside the layouts, generated images can be out-of-distribution and contain unnatural artifacts, or attributes bleed across subjects, leading to incorrect visual outputs. In this work, we propose MALeR, a method that addresses each of these challenges. Given a text prompt and corresponding layouts, our method prevents subjects from appearing outside the given layouts while being in-distribution. Additionally, we propose a masked, attribute-aware binding mechanism that prevents attribute leakage, enabling accurate rendering of subjects with multiple attributes, even in complex compositional scenes. Qualitative and quantitative evaluation demonstrates that our method achieves superior performance in compositional accuracy, generation consistency, and attribute binding compared to previous work. MALeR is particularly adept at generating images of scenes with multiple subjects and multiple attributes per subject.
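The masked, attribute-aware binding idea can be made concrete: if each subject occupies a known set of image tokens given by its layout, attention between tokens of different subjects can simply be blocked, so attributes cannot leak across regions. A schematic numpy version, with shapes and the masking rule invented for illustration rather than taken from MALeR's architecture:

```python
import numpy as np

def masked_attention(Q, K, V, allow):
    """Single-head attention where allow[i, j] == 0 blocks token i from
    attending to token j."""
    logits = Q @ K.T / np.sqrt(Q.shape[-1])
    logits = np.where(allow > 0, logits, -1e9)   # mask disallowed pairs
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(1)
n_tokens, d = 16, 8
Q, K, V = (rng.standard_normal((n_tokens, d)) for _ in range(3))

# Two subjects occupy disjoint sets of image tokens (their layouts).
subject_of = np.array([0] * 8 + [1] * 8)

# Binding mask: tokens may only attend within their own subject's
# region, so attribute information cannot bleed across subjects.
allow = (subject_of[:, None] == subject_of[None, :]).astype(float)
print(masked_attention(Q, K, V, allow).shape)    # (16, 8)
```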
Citations: 0
Marching Neurons: Accurate Surface Extraction for Neural Implicit Shapes
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763328
Christian Stippel, Felix Mujkanovic, Thomas Leimkühler, Pedro Hermosilla
Accurate surface geometry representation is crucial in 3D visual computing. Explicit representations, such as polygonal meshes, and implicit representations, like signed distance functions, each have distinct advantages, making efficient conversions between them increasingly important. Conventional surface extraction methods for implicit representations, such as the widely used Marching Cubes algorithm, rely on spatial decomposition and sampling, leading to inaccuracies due to fixed and limited resolution. We introduce a novel approach for analytically extracting surfaces from neural implicit functions. Our method operates natively in parallel and can navigate large neural architectures. By leveraging the fact that each neuron partitions the domain, we develop a depth-first traversal strategy to efficiently track the encoded surface. The resulting meshes faithfully capture the full geometric information from the network without ad-hoc spatial discretization, achieving unprecedented accuracy across diverse shapes and network architectures while maintaining competitive speed.
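The statement that each neuron partitions the domain is concrete for ReLU networks: every pre-activation is an affine function of the input, its zero set is a hyperplane, and the sign pattern over all neurons identifies the linear region containing a point; within one region the network is exactly affine, so surface crossings can be located analytically. A toy numpy illustration with a single hidden layer (the paper's traversal handles full architectures):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((6, 2))   # 6 neurons over a 2D input domain
b = rng.standard_normal(6)

def region_code(x):
    """Sign pattern of the pre-activations: which side of each neuron's
    hyperplane W[i] @ x + b[i] = 0 the point x lies on. Points sharing
    a code lie in the same linear region of the ReLU network."""
    return tuple((W @ x + b > 0.0).astype(int))

# Within one region the network is affine, so a zero crossing of the
# output can be solved for exactly instead of being sampled for.
print(region_code(np.array([0.0, 0.0])))
print(region_code(np.array([2.0, -1.0])))
```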
Citations: 0
Jackknife Transmittance and MIS Weight Estimation
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763273
Christoph Peters
A core operation in Monte Carlo volume rendering is transmittance estimation: given a segment along a ray, the goal is to estimate the fraction of light that will pass through this segment without encountering absorption or out-scattering. A naive approach is to estimate the optical depth τ using unbiased ray marching and then to use exp(-τ) as the transmittance estimate. However, this strategy systematically overestimates transmittance due to Jensen's inequality. On the other hand, existing unbiased transmittance estimators either suffer from high variance or have a cost governed by random decisions, which makes them less suitable for SIMD architectures. We propose a biased transmittance estimator with significantly reduced bias compared to the naive approach and a deterministic, low cost. We observe that ray marching with stratified jittered sampling results in estimates of optical depth that are nearly normal-distributed. We then apply the unique minimum variance unbiased (UMVU) estimator of exp(-τ) based on two such estimates (using two different sets of random numbers). Bias only arises from violations of the assumption of normal-distributed inputs. We further reduce bias and variance using a variance-aware importance sampling scheme. The underlying theory can be used to estimate any analytic function of optical depth. We use this generalization to estimate multiple importance sampling (MIS) weights and introduce two integrators: unbiased MIS with biased MIS weights, and a more efficient but biased combination of MIS and transmittance estimation.
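The Jensen-inequality bias named in the abstract is easy to reproduce: even when the optical-depth estimate is unbiased, E[exp(-τ̂)] ≥ exp(-E[τ̂]), with the gap growing with the variance of τ̂. A small numpy check using unbiased stratified jittered ray marching on a known extinction function (the paper's jackknife correction itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma_t(t):
    """Extinction coefficient along a unit-length ray segment."""
    return 1.0 + np.sin(6.0 * t) ** 2

true_tau = sigma_t(np.linspace(0.0, 1.0, 200001)).mean()  # reference integral

def tau_hat(n):
    """Unbiased ray marching: stratified jittered samples of sigma_t."""
    u = (np.arange(n) + rng.random(n)) / n
    return sigma_t(u).mean()              # segment length is 1

naive = np.array([np.exp(-tau_hat(2)) for _ in range(20000)])
print("true transmittance :", np.exp(-true_tau))
print("naive estimator    :", naive.mean())   # systematically higher
```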
Citations: 0
MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763339
Antoine Guédon, Diego Gomez, Nissim Maruani, Bingchen Gong, George Drettakis, Maks Ovsjanikov
While recent advances in Gaussian Splatting have enabled fast reconstruction of high-quality 3D scenes from images, extracting accurate surface meshes remains a challenge. Current approaches extract the surface through costly post-processing steps, resulting in the loss of fine geometric details or requiring significant time and leading to very dense meshes with millions of vertices. More fundamentally, the a posteriori conversion from a volumetric to a surface representation limits the ability of the final mesh to preserve all geometric structures captured during training. We present MILo, a novel Gaussian Splatting framework that bridges the gap between volumetric and surface representations by differentiably extracting a mesh from the 3D Gaussians. We design a fully differentiable procedure that constructs the mesh—including both vertex locations and connectivity—at every iteration directly from the parameters of the Gaussians, which are the only quantities optimized during training. Our method introduces three key technical contributions: (1) a bidirectional consistency framework ensuring both representations—Gaussians and the extracted mesh—capture the same underlying geometry during training; (2) an adaptive mesh extraction process performed at each training iteration, which uses Gaussians as differentiable pivots for Delaunay triangulation; (3) a novel method for computing signed distance values from the 3D Gaussians that enables precise surface extraction while avoiding geometric erosion. Our approach can reconstruct complete scenes, including backgrounds, with state-of-the-art quality while requiring an order of magnitude fewer mesh vertices than previous methods. Due to their light weight and empty interior, our meshes are well suited for downstream applications such as physics simulations and animation. The code for our approach and an online gallery are available at https://anttwo.github.io/milo/.
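One stated ingredient, Gaussians acting as pivots for a Delaunay triangulation from which surface elements are selected by signed distance, can be sketched in a non-differentiable toy form. The scipy example below uses random points as stand-in Gaussian centers and a sphere as a stand-in SDF; it only marks sign-crossing tetrahedra and is not the paper's differentiable pipeline:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
pivots = rng.uniform(-1.0, 1.0, size=(400, 3))   # stand-in Gaussian centers

def sdf(p):
    """Toy signed distance field: a sphere in place of the learned SDF."""
    return np.linalg.norm(p, axis=-1) - 0.8

tets = Delaunay(pivots)          # tetrahedralize the pivot points
inside = sdf(pivots) < 0.0

# Tetrahedra whose vertices straddle the zero level set carry the surface;
# a mesh extractor would place vertices on their sign-crossing edges.
crossing = [t for t in tets.simplices if 0 < inside[t].sum() < 4]
print(f"{len(crossing)} of {len(tets.simplices)} tets cross the level set")
```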
Citations: 0
Imaginarium: Vision-guided High-Quality 3D Scene Layout Generation
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763353
Xiaoming Zhu, Xu Huang, Qinghongbing Xie, Zhi Deng, Junsheng Yu, Yirui Guan, Zhongyuan Liu, Lin Zhu, Qijun Zhao, Ligang Liu, Long Zeng
Generating artistic and coherent 3D scene layouts is crucial in digital content creation. Traditional optimization-based methods are often constrained by cumbersome manual rules, while deep generative models face challenges in producing content with richness and diversity. Furthermore, approaches that utilize large language models frequently lack robustness and fail to accurately capture complex spatial relationships. To address these challenges, this paper presents a novel vision-guided 3D layout generation system. We first construct a high-quality asset library containing 2,037 scene assets and 147 3D scene layouts. Subsequently, we employ an image generation model to expand prompt representations into images, fine-tuning it to align with our asset library. We then develop a robust image parsing module to recover the 3D layout of scenes based on visual semantics and geometric information. Finally, we optimize the scene layout using scene graphs and overall visual semantics to ensure logical coherence and alignment with the images. Extensive user testing demonstrates that our algorithm significantly outperforms existing methods in terms of layout richness and quality. The code and dataset will be available at https://github.com/HiHiAllen/Imaginarium.
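The final stage optimizes the layout against scene graphs and overall visual semantics. As a hypothetical miniature of such a consistency check (object names, the Box type, and the single "on" relation below are invented for illustration), one can verify that a candidate layout satisfies its graph relations before scoring it:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float   # min corner
    w: float; d: float; h: float   # extents along x, y, z

def overlaps_xy(a: Box, b: Box) -> bool:
    """Footprints of the two boxes intersect in the ground plane."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.d and b.y < a.y + a.d)

def on_top_of(a: Box, b: Box, tol: float = 0.02) -> bool:
    """Relation 'a on b': footprints overlap and a's bottom meets b's top."""
    return overlaps_xy(a, b) and abs(a.z - (b.z + b.h)) < tol

# Hypothetical layout and scene graph.
layout = {
    "table": Box(0.0, 0.0, 0.0, 1.2, 0.8, 0.75),
    "lamp": Box(0.1, 0.1, 0.75, 0.2, 0.2, 0.4),
}
scene_graph = [("lamp", "on", "table")]

ok = all(on_top_of(layout[s], layout[o])
         for s, rel, o in scene_graph if rel == "on")
print("layout satisfies scene graph:", ok)
```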
Citations: 0
MiGumi: Making Tightly Coupled Integral Joints Millable
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763304
Aditya Ganeshan, Kurt Fleischer, Wenzel Jakob, Ariel Shamir, Daniel Ritchie, Takeo Igarashi, Maria Larsson
Traditional integral wood joints, despite their strength, durability, and elegance, remain rare in modern workflows due to the cost and difficulty of manual fabrication. CNC milling offers a scalable alternative, but directly milling traditional joints often fails to produce functional results because milling induces geometric deviations—such as rounded inner corners—that alter the target geometries of the parts. Since joints rely on tightly fitting surfaces, such deviations introduce gaps or overlaps that undermine fit or block assembly. We propose to overcome this problem by (1) designing a language that represents millable geometry, and (2) co-optimizing part geometries to restore coupling. We introduce Millable Extrusion Geometry (MXG), a language for representing geometry as the outcome of milling operations performed with flat-end drill bits. MXG represents each operation as a subtractive extrusion volume defined by a tool direction and drill radius. This parameterization enables the modeling of artifact-free geometry under an idealized zero-radius drill bit, matching traditional joint designs. Increasing the radius then reveals milling-induced deviations, which compromise the integrity of the joint. To restore coupling, we formalize tight coupling in terms of both surface proximity and proximity constraints on the mill-bit paths associated with mating surfaces. We then derive two tractable, differentiable losses that enable efficient optimization of joint geometry. We evaluate our method on 30 traditional joint designs, demonstrating that it produces CNC-compatible, tightly fitting joints that approximate the original geometry. By reinterpreting traditional joints for CNC workflows, we continue the evolution of this heritage craft and help ensure its relevance in future making practices.
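The corner-rounding deviation is easy to visualize in 2D: a flat-end bit of radius r can only remove the morphological opening of a pocket (erode by r, then dilate by r), which leaves inner corners rounded with radius r. A shapely sketch under that 2D simplification; the paper itself works with full 3D extrusion volumes:

```python
from shapely.geometry import Polygon

# Square pocket called for by a joint design (side 10, sharp corners).
pocket = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])

r = 1.5  # drill radius

# Region a flat-end bit of radius r can actually clear: the morphological
# opening of the pocket. Inner corners come out rounded with radius r.
millable = pocket.buffer(-r).buffer(r)

# Residual material the bit cannot reach: the corner deviation that
# breaks the tight coupling between mating parts.
residual = pocket.difference(millable)
print("unreachable area:", round(residual.area, 3))  # ~4 * r^2 * (1 - pi/4)
```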
Citations: 0
INF-3DP: Implicit Neural Fields for Collision-Free Multi-Axis 3D Printing
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763354
Jiasheng Qu, Zhuo Huang, Dezhao Guo, Hailin Sun, Aoran Lyu, Chengkai Dai, Yeung Yam, Guoxin Fang
We introduce a general, scalable computational framework for multi-axis 3D printing based on implicit neural fields (INFs) that unifies all stages of tool-path generation and global collision-free motion planning. In our pipeline, input models are represented as signed distance fields, with fabrication objectives—such as support-free printing, surface finish quality, and extrusion control—directly encoded in the optimization of an implicit guidance field. This unified approach enables toolpath optimization across both surface and interior domains, allowing shell and infill paths to be generated via implicit field interpolation. The printing sequence and multi-axis motion are then jointly optimized over a continuous quaternion field. Our continuous formulation constructs the evolving printing object as a time-varying SDF, supporting differentiable global collision handling throughout INF-based motion planning. Compared to explicit-representation-based methods, INF-3DP achieves up to two orders of magnitude speedup and significantly reduces waypoint-to-surface error. We validate our framework on diverse, complex models and demonstrate its efficiency with physical fabrication experiments using a robot-assisted multi-axis system.
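The collision model described above, with the partially printed object represented as a time-varying SDF that candidate tool poses are checked against, can be sketched directly. A minimal numpy version assuming a box-shaped part that grows with print time and a tool sampled as points along its shaft (the real system differentiates through such queries; shapes and sampling here are invented):

```python
import numpy as np

def part_sdf(p, t):
    """Time-varying SDF of the growing print: a unit-footprint box whose
    height equals the fraction t of the build deposited so far."""
    half = np.array([0.5, 0.5, 0.5 * t])       # half-extents grow with t
    center = np.array([0.0, 0.0, 0.5 * t])
    q = np.abs(p - center) - half
    return (np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            + np.minimum(q.max(axis=-1), 0.0))

def tool_clearance(tip, axis, t, n=32, length=2.0):
    """Minimum SDF over points sampled along the tool shaft; a negative
    value means the pose collides with already-printed material."""
    s = np.linspace(0.0, length, n)[:, None]
    pts = tip + s * (axis / np.linalg.norm(axis))
    return part_sdf(pts, t).min()

tip = np.array([0.0, 0.0, 0.55])
up = np.array([0.0, 0.0, 1.0])
tilted = np.array([1.0, 0.0, -0.2])
print(tool_clearance(tip, up, t=0.5))      # > 0: vertical pose is safe
print(tool_clearance(tip, tilted, t=1.0))  # < 0: tilted pose collides
```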
Citations: 0