
Latest Publications in ACM Transactions on Graphics

Lightweight, Edge-Aware, and Temporally Consistent Supersampling for Mobile Real-Time Rendering
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763348
Sipeng Yang, Jiayu Ji, Junhao Zhuge, Jinzhe Zhao, Qiang Qiu, Chen Li, Yuzhong Yan, Kerong Wang, Lingqi Yan, Xiaogang Jin
Supersampling has proven highly effective in enhancing visual fidelity by reducing aliasing, increasing resolution, and generating interpolated frames. It has become a standard component of modern real-time rendering pipelines. However, on mobile platforms, deep learning-based supersampling methods remain impractical due to stringent hardware constraints, while non-neural supersampling techniques often fall short in delivering perceptually high-quality results. In particular, producing visually pleasing reconstructions and temporally coherent interpolations is still a significant challenge in mobile settings. In this work, we present a novel, lightweight supersampling framework tailored for mobile devices. Our approach substantially improves both image reconstruction quality and temporal consistency while maintaining real-time performance. For super-resolution, we propose an intra-pixel object coverage estimation method for reconstructing high-quality anti-aliased pixels in edge regions, a gradient-guided strategy for non-edge areas, and a temporal sample accumulation approach to improve overall image quality. For frame interpolation, we develop an efficient motion estimation module coupled with a lightweight fusion scheme that integrates both estimated optical flow and rendered motion vectors, enabling temporally coherent interpolation of object dynamics and lighting variations. Extensive experiments demonstrate that our method consistently outperforms existing baselines in both perceptual image quality and temporal smoothness, while maintaining real-time performance on mobile GPUs. A demo application and supplementary materials are available on the project page.
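The temporal sample accumulation and motion-vector-guided reprojection are only named above. For readers unfamiliar with the pattern, the sketch below shows a generic TAA-style scheme: warp the accumulated history with motion vectors, clamp it to the local color range of the current frame, and blend. The blend weight, nearest-neighbor warp, and clamping choice are our own illustrative assumptions, not the paper's mobile pipeline.

```python
import numpy as np

def reproject(history: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Warp the accumulated history frame to the current frame using per-pixel
    motion vectors (nearest-neighbor fetch for brevity).

    history: (H, W, 3) previously accumulated color.
    motion:  (H, W, 2) motion vectors in pixels, pointing from each current
             pixel to its position in the previous frame.
    """
    h, w, _ = history.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip(np.round(xs + motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + motion[..., 1]).astype(int), 0, h - 1)
    return history[src_y, src_x]

def temporal_accumulate(current: np.ndarray, history: np.ndarray,
                        motion: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Blend the reprojected history with the current aliased frame; clamping
    the history to the local color range of the current frame limits ghosting."""
    warped = reproject(history, motion)
    neighborhood = [np.roll(current, (dy, dx), axis=(0, 1))
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    lo, hi = np.minimum.reduce(neighborhood), np.maximum.reduce(neighborhood)
    warped = np.clip(warped, lo, hi)
    return alpha * current + (1.0 - alpha) * warped
```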
Citations: 0
CFC: Simulating Character-Fluid Coupling using a Two-Level World Model
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763318
Zhiyang Dou, Chen Peng, Xinyu Lu, Xiaohan Ye, Lixing Fang, Yuan Liu, Wenping Wang, Chuang Gan, Lingjie Liu, Taku Komura
Humans possess the ability to master a wide range of motor skills, enabling them to quickly and flexibly adapt to the surrounding environment. Despite recent progress in replicating such versatile human motor skills, existing research often oversimplifies or inadequately captures the complex interplay between human body movements and highly dynamic environments, such as interactions with fluids. In this paper, we present a world model for Character-Fluid Coupling (CFC) for simulating human-fluid interactions via two-way coupling. We introduce a two-level world model that consists of a Physics-Informed Neural Network (PINN)-based model for fluid dynamics and a character world model capturing body dynamics under various external forces. This two-level world model adeptly predicts the dynamics of fluid and its influence on rigid bodies via force prediction, sidestepping the computational burden of fluid simulation and providing policy gradients for efficient policy training. Once trained, our system can control characters to complete high-level tasks while adaptively responding to environmental changes. We also show that the fluid gives rise to emergent behaviors in the characters, enhancing motion diversity and interactivity. Extensive experiments underscore the effectiveness of CFC, demonstrating its ability to produce high-quality, realistic human-fluid interaction animations.
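The core mechanism described above, one model predicting fluid forces and another predicting character dynamics, with gradients flowing back through the rollout to the policy, can be sketched as the control flow below. All module interfaces, layer sizes, and names (FluidWorldModel, CharacterWorldModel, rollout_loss) are hypothetical stand-ins under our own assumptions, not the CFC architecture.

```python
import torch
import torch.nn as nn

class FluidWorldModel(nn.Module):
    """Stand-in for a learned model that predicts the force the fluid exerts on
    the character, replacing an explicit fluid solve during policy training."""
    def __init__(self, state_dim: int, force_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                 nn.Linear(256, force_dim))
    def forward(self, char_state: torch.Tensor) -> torch.Tensor:
        return self.net(char_state)

class CharacterWorldModel(nn.Module):
    """Stand-in for a model that predicts the next character state given the
    current state, the policy action, and the external (fluid) force."""
    def __init__(self, state_dim: int, action_dim: int, force_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim + force_dim, 512),
                                 nn.ReLU(), nn.Linear(512, state_dim))
    def forward(self, state, action, force):
        return self.net(torch.cat([state, action, force], dim=-1))

def rollout_loss(policy, fluid_model, char_model, state, task_loss, horizon=32):
    """Differentiable rollout through both world models; gradients of the task
    loss flow back into the policy (the 'policy gradients' in the abstract)."""
    total = 0.0
    for _ in range(horizon):
        action = policy(state)
        force = fluid_model(state)                  # fluid -> character coupling
        state = char_model(state, action, force)    # character dynamics step
        total = total + task_loss(state)
    return total / horizon
```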
Citations: 0
PractiLight: Practical Light Control Using Foundational Diffusion Models
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763342
Yotam Erel, Rishabh Dabral, Vladislav Golyanik, Amit H. Bermano, Christian Theobalt
Light control in generated images is a difficult task, posing challenges that span the entire image and frequency spectrum. Most approaches tackle this problem by training on extensive yet domain-specific datasets, limiting the inherent generalization and applicability of the foundational backbones used. Instead, PractiLight is a practical approach, effectively leveraging foundational understanding of recent generative models for the task. Our key insight is that lighting relationships in an image are similar in nature to token interaction in self-attention layers, and hence are best represented there. Based on this and other analyses regarding the importance of early diffusion iterations, PractiLight trains a lightweight LoRA regressor to produce the direct-irradiance map for a given image, using a small set of training images. We then employ this regressor to incorporate the desired lighting into the generation process of another image using Classifier Guidance. This careful design generalizes well to diverse conditions and image domains. We demonstrate state-of-the-art performance in terms of quality and control with proven parameter and data efficiency compared to leading works over a wide variety of scene types. We hope this work affirms that image lighting can feasibly be controlled by tapping into foundational knowledge, enabling practical and general relighting.
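Classifier Guidance with an irradiance regressor, as described above, follows a familiar recipe: at each denoising step, a loss between the regressor's prediction for the current sample and the target lighting is differentiated with respect to the latent and used to steer the update. The sketch below assumes a diffusers-style scheduler interface and hypothetical unet and irradiance_regressor callables; it is a generic guidance sketch, not PractiLight's released code.

```python
import torch
import torch.nn.functional as F

def guided_denoise_step(latent, t, unet, irradiance_regressor,
                        target_irradiance, scheduler, guidance_scale=1.0):
    """One denoising step steered toward a user-specified direct-irradiance map.

    `unet(latent, t)` predicts noise, `irradiance_regressor(latent, t)` predicts
    the irradiance map of the current sample, and `scheduler.step(...)` follows a
    diffusers-style interface returning `.prev_sample`; all three are assumptions.
    """
    latent = latent.detach().requires_grad_(True)
    noise_pred = unet(latent, t)
    pred_irradiance = irradiance_regressor(latent, t)
    lighting_loss = F.mse_loss(pred_irradiance, target_irradiance)
    grad = torch.autograd.grad(lighting_loss, latent)[0]
    # Classifier guidance: nudge the update down the lighting-loss gradient.
    guided_noise = noise_pred + guidance_scale * grad
    return scheduler.step(guided_noise, t, latent).prev_sample
```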
Citations: 0
One-shot Embroidery Customization via Contrastive LoRA Modulation
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763290
Jun Ma, Qian He, Gaofeng He, Huang Chen, Chen Liu, Xiaogang Jin, Huamin Wang
Diffusion models have significantly advanced image manipulation techniques, and their ability to generate photorealistic images is beginning to transform retail workflows, particularly in presale visualization. Beyond artistic style transfer, the capability to perform fine-grained visual feature transfer is becoming increasingly important. Embroidery is a textile art form characterized by intricate interplay of diverse stitch patterns and material properties, which poses unique challenges for existing style transfer methods. To explore the customization for such fine-grained features, we propose a novel contrastive learning framework that disentangles fine-grained style and content features with a single reference image, building on the classic concept of image analogy. We first construct an image pair to define the target style, and then adopt a similarity metric based on the decoupled representations of pretrained diffusion models for style-content separation. Subsequently, we propose a two-stage contrastive LoRA modulation technique to capture fine-grained style features. In the first stage, we iteratively update the whole LoRA and the selected style blocks to initially separate style from content. In the second stage, we design a contrastive learning strategy to further decouple style and content through self-knowledge distillation. Finally, we build an inference pipeline to handle image or text inputs with only the style blocks. To evaluate our method on fine-grained style transfer, we build a benchmark for embroidery customization. Our approach surpasses prior methods on this task and further demonstrates strong generalization to three additional domains: artistic style transfer, sketch colorization, and appearance transfer. Our project is available at: https://style3d.github.io/embroidery_customization.
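The intuition behind the style-content separation above is that the constructed image pair shares the target style but differs in content, so style features should agree while content features should not. The toy loss below, with an assumed split of a flat feature vector into style and content halves, conveys only that intuition; the paper's two-stage contrastive LoRA modulation and self-knowledge distillation losses are more involved.

```python
import torch
import torch.nn.functional as F

def style_content_contrastive_loss(feat_a: torch.Tensor, feat_b: torch.Tensor,
                                   style_dims: int) -> torch.Tensor:
    """Toy disentanglement loss for a reference pair (a, b) with shared style and
    different content. Splitting a feature vector into a style part and a content
    part is an illustrative assumption, not the paper's representation."""
    style_a, content_a = feat_a[:style_dims], feat_a[style_dims:]
    style_b, content_b = feat_b[:style_dims], feat_b[style_dims:]
    style_agree = 1.0 - F.cosine_similarity(style_a, style_b, dim=0)            # pull together
    content_apart = F.relu(F.cosine_similarity(content_a, content_b, dim=0))    # push apart
    return style_agree + content_apart
```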
Citations: 0
CrossGen: Learning and Generating Cross Fields for Quad Meshing
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763299
Qiujie Dong, Jiepeng Wang, Rui Xu, Cheng Lin, Yuan Liu, Shiqing Xin, Zichun Zhong, Xin Li, Changhe Tu, Taku Komura, Leif Kobbelt, Scott Schaefer, Wenping Wang
Cross fields play a critical role in various geometry processing tasks, especially for quad mesh generation. Existing methods for cross field generation often struggle to balance computational efficiency with generation quality, using slow per-shape optimization. We introduce CrossGen, a novel framework that supports both feed-forward prediction and latent generative modeling of cross fields for quad meshing by unifying geometry and cross field representations within a joint latent space. Our method enables extremely fast computation of high-quality cross fields of general input shapes, typically within one second without per-shape optimization. Our method assumes a point-sampled surface, also called a point-cloud surface, as input, so we can accommodate various surface representations by a straightforward point sampling process. Using an auto-encoder network architecture, we encode input point-cloud surfaces into a sparse voxel grid with fine-grained latent spaces, which are decoded into both SDF-based surface geometry and cross fields. We also contribute a dataset of models with both high-quality signed distance field (SDF) representations and their corresponding cross fields, and use it to train our network. Once trained, the network is capable of computing a cross field of an input surface in a feed-forward manner, ensuring high geometric fidelity, noise resilience, and rapid inference. Furthermore, leveraging the same unified latent representation, we incorporate a diffusion model for computing cross fields of new shapes generated from partial input, such as sketches. To demonstrate its practical applications, we validate CrossGen on the quad mesh generation task for a large variety of surface shapes. Experimental results demonstrate that CrossGen generalizes well across diverse shapes and consistently yields high-fidelity cross fields, thus facilitating the generation of high-quality quad meshes.
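For readers unfamiliar with the output representation: a cross field is a 4-rotationally-symmetric direction field, commonly stored (this encoding is assumed here purely for illustration, since the abstract does not specify the decoded format) as an angle theta per surface point encoded as (cos 4*theta, sin 4*theta) in a local tangent frame. The helper below decodes that standard representation back into the four tangent directions.

```python
import numpy as np

def rosy4_to_directions(cos4t: np.ndarray, sin4t: np.ndarray,
                        tangent: np.ndarray, bitangent: np.ndarray) -> np.ndarray:
    """Decode a 4-RoSy cross field stored as (cos 4*theta, sin 4*theta) per point
    into the four directions it represents in the local tangent frame.

    cos4t, sin4t:       (N,) encoded angle per point.
    tangent, bitangent: (N, 3) orthonormal tangent frame per point.
    Returns:            (N, 4, 3) the four cross directions per point.
    """
    theta = np.arctan2(sin4t, cos4t) / 4.0
    dirs = []
    for k in range(4):
        angle = theta + k * np.pi / 2.0
        dirs.append(np.cos(angle)[:, None] * tangent +
                    np.sin(angle)[:, None] * bitangent)
    return np.stack(dirs, axis=1)
```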
Citations: 0
Force-Dual Modes: Subspace Design from Stochastic Forces
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763310
Otman Benchekroun, Eitan Grinspun, Maurizio Chiaramonte, Philip Allen Etter
Designing subspaces for Reduced Order Modeling (ROM) is crucial for accelerating finite element simulations in graphics and engineering. Unfortunately, it's not always clear which subspace is optimal for arbitrary dynamic simulation. We propose to construct simulation subspaces from force distributions, allowing us to tailor such subspaces to common scene interactions involving constraint penalties, handles-based control, contact and musculoskeletal actuation. To achieve this we adopt a statistical perspective on Reduced Order Modelling, which allows us to push such user-designed force distributions through a linearized simulation to obtain a dual distribution on displacements. To construct our subspace, we then fit a low-rank Gaussian model to this displacement distribution, which we show generalizes Linear Modal Analysis subspaces for uncorrelated unit variance force distributions, as well as Green's Function subspaces for low rank force distributions. We show our framework allows for the construction of subspaces that are optimal both with respect to physical material properties, as well as arbitrary force distributions as observed in handle-based, contact, and musculoskeletal scene interactions.
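The construction above has a compact linear-algebra reading: if the linearized simulation maps forces to displacements as u = K^-1 f for a stiffness matrix K, then a zero-mean force distribution with covariance S_f induces a displacement covariance K^-1 S_f K^-T, and the leading eigenvectors of that matrix give a low-rank basis. The dense NumPy sketch below illustrates that reading under our own assumptions; the paper's mass weighting, regularization, and the sparse factorizations needed at scale are omitted.

```python
import numpy as np

def force_dual_subspace(K: np.ndarray, force_cov: np.ndarray, rank: int) -> np.ndarray:
    """Low-rank displacement basis induced by a force distribution.

    K:         (n, n) stiffness matrix of the linearized simulation.
    force_cov: (n, n) covariance of the user-designed force distribution.
    rank:      number of modes to keep.
    Returns:   (n, rank) basis of leading displacement-covariance eigenvectors.
    """
    K_inv = np.linalg.inv(K)                     # dense demo only; factorize K sparsely in practice
    disp_cov = K_inv @ force_cov @ K_inv.T       # covariance of u = K^-1 f
    eigvals, eigvecs = np.linalg.eigh(disp_cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:rank]     # keep the largest modes
    return eigvecs[:, order]
```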
Citations: 0
SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763309
Sanjeev Muralikrishnan, Niladri Shekhar Dutt, Niloy J. Mitra
Animation retargeting applies a sparse motion description (e.g., keypoint sequences) to a character mesh to produce a semantically plausible and temporally coherent full-body mesh sequence. Existing approaches come with restrictions: they require access to template-based shape priors or artist-designed deformation rigs, suffer from limited generalization to unseen motion and/or shapes, or exhibit motion jitter. We propose Self-supervised Motion Fields (SMF), a self-supervised framework that is trained with only sparse motion representations, without requiring dataset-specific annotations, templates, or rigs. At the heart of our method are Kinetic Codes, a novel autoencoder-based sparse motion encoding, that exposes a semantically rich latent space, simplifying large-scale training. Our architecture comprises dedicated spatial and temporal gradient predictors, which are jointly trained in an end-to-end fashion. The combined network, regularized by the Kinetic Codes' latent space, has good generalization across both unseen shapes and new motions. We evaluated our method on unseen motion sampled from AMASS, D4D, Mixamo, and raw monocular video for animation transfer on various characters with varying shapes and topology. We report a new SoTA on the AMASS dataset in the context of generalization to unseen motion.
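The Kinetic Codes are described as an autoencoder over sparse motion. As a purely illustrative stand-in (the architecture, layer sizes, and the flattened keypoint layout are all our assumptions, not the released model), a minimal version of such an encoding looks like this:

```python
import torch
import torch.nn as nn

class KineticCodeAutoencoder(nn.Module):
    """Illustrative autoencoder over sparse motion (keypoint sequences), a
    hypothetical stand-in for the paper's Kinetic Codes."""
    def __init__(self, n_frames: int, n_keypoints: int, latent_dim: int = 128):
        super().__init__()
        d = n_frames * n_keypoints * 3
        self.encoder = nn.Sequential(nn.Linear(d, 512), nn.GELU(),
                                     nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.GELU(),
                                     nn.Linear(512, d))

    def forward(self, keypoints: torch.Tensor):
        # keypoints: (batch, T, K, 3) sparse motion description.
        flat = keypoints.flatten(start_dim=1)
        code = self.encoder(flat)                     # the latent "kinetic code"
        recon = self.decoder(code).view_as(keypoints) # reconstruction target
        return code, recon
```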
Citations: 0
MALeR: Improving Compositional Fidelity in Layout-Guided Generation
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763341
Shivank Saxena, Dhruv Srivastava, Makarand Tapaswi
Recent advances in text-to-image models have enabled a new era of creative and controllable image generation. However, generating compositional scenes with multiple subjects and attributes remains a significant challenge. To enhance user control over subject placement, several layout-guided methods have been proposed. However, these methods face numerous challenges, particularly in compositional scenes. Unintended subjects often appear outside the layouts, generated images can be out-of-distribution and contain unnatural artifacts, or attributes bleed across subjects, leading to incorrect visual outputs. In this work, we propose MALeR, a method that addresses each of these challenges. Given a text prompt and corresponding layouts, our method prevents subjects from appearing outside the given layouts while being in-distribution. Additionally, we propose a masked, attribute-aware binding mechanism that prevents attribute leakage, enabling accurate rendering of subjects with multiple attributes, even in complex compositional scenes. Qualitative and quantitative evaluation demonstrates that our method achieves superior performance in compositional accuracy, generation consistency, and attribute binding compared to previous work. MALeR is particularly adept at generating images of scenes with multiple subjects and multiple attributes per subject.
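One widely used way to keep subjects and their attributes inside their boxes is to mask cross-attention so that a subject's tokens (including its attribute tokens) only interact with pixels inside that subject's layout region. The helper below builds such a mask; it is a generic layout-guidance sketch with assumed tensor conventions, not MALeR's specific masked, attribute-aware binding mechanism.

```python
import torch

def layout_attention_mask(token_to_subject: torch.Tensor,
                          subject_masks: torch.Tensor) -> torch.Tensor:
    """Cross-attention mask binding each prompt token to its subject's layout.

    token_to_subject: (n_tokens,) subject index per token, -1 for global tokens.
    subject_masks:    (n_subjects, H*W) boolean layout masks.
    Returns:          (H*W, n_tokens) boolean mask; True = attention allowed.
    """
    n_pix = subject_masks.shape[1]
    n_tok = token_to_subject.shape[0]
    mask = torch.ones(n_pix, n_tok, dtype=torch.bool)   # global tokens see everything
    for tok in range(n_tok):
        s = int(token_to_subject[tok])
        if s >= 0:
            mask[:, tok] = subject_masks[s]              # only pixels inside the subject's box
    return mask
```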
Citations: 0
Marching Neurons: Accurate Surface Extraction for Neural Implicit Shapes
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763328
Christian Stippel, Felix Mujkanovic, Thomas Leimkühler, Pedro Hermosilla
Accurate surface geometry representation is crucial in 3D visual computing. Explicit representations, such as polygonal meshes, and implicit representations, like signed distance functions, each have distinct advantages, making efficient conversions between them increasingly important. Conventional surface extraction methods for implicit representations, such as the widely used Marching Cubes algorithm, rely on spatial decomposition and sampling, leading to inaccuracies due to fixed and limited resolution. We introduce a novel approach for analytically extracting surfaces from neural implicit functions. Our method operates natively in parallel and can navigate large neural architectures. By leveraging the fact that each neuron partitions the domain, we develop a depth-first traversal strategy to efficiently track the encoded surface. The resulting meshes faithfully capture the full geometric information from the network without ad-hoc spatial discretization, achieving unprecedented accuracy across diverse shapes and network architectures while maintaining competitive speed.
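The observation that each neuron partitions the domain has a concrete consequence the sketch below makes explicit: inside one activation region of a ReLU MLP, the network reduces to an affine function, so the zero level set there is a plane patch that can be found analytically. Computing that local affine map is a standard exercise shown here; the paper's contribution, the parallel depth-first traversal of regions that tracks the encoded surface, is not reproduced.

```python
import numpy as np

def local_affine(weights, biases, x):
    """Return (w, b) such that f(y) = w @ y + b inside the ReLU linear region
    containing x, for an MLP with hidden ReLU layers and a linear output layer.

    weights[i]: (n_out, n_in) matrix of layer i; biases[i]: (n_out,) vector.
    For a scalar SDF network, the zero set within this region is the plane
    w @ y + b = 0 (clipped to the region).
    """
    A = np.eye(x.shape[0])          # running affine map: h = A @ x + d
    d = np.zeros(x.shape[0])
    h = x.astype(float)
    for W, b in zip(weights[:-1], biases[:-1]):
        pre = W @ h + b
        D = (pre > 0).astype(float)  # activation pattern fixes the region
        A = (W * D[:, None]) @ A     # compose diag(D) @ W with the running map
        d = D * (W @ d + b)
        h = D * pre
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ A, W_out @ d + b_out
```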
Citations: 0
Jackknife Transmittance and MIS Weight Estimation
IF 6.2 | CAS Zone 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763273
Christoph Peters
A core operation in Monte Carlo volume rendering is transmittance estimation: Given a segment along a ray, the goal is to estimate the fraction of light that will pass through this segment without encountering absorption or out-scattering. A naive approach is to estimate optical depth τ using unbiased ray marching and to then use exp(-τ) as transmittance estimate. However, this strategy systematically overestimates transmittance due to Jensen's inequality. On the other hand, existing unbiased transmittance estimators either suffer from high variance or have a cost governed by random decisions, which makes them less suitable for SIMD architectures. We propose a biased transmittance estimator with significantly reduced bias compared to the naive approach and a deterministic and low cost. We observe that ray marching with stratified jittered sampling results in estimates of optical depth that are nearly normal-distributed. We then apply the unique minimum variance unbiased (UMVU) estimator of exp(-τ) based on two such estimates (using two different sets of random numbers). Bias only arises from violations of the assumption of normal-distributed inputs. We further reduce bias and variance using a variance-aware importance sampling scheme. The underlying theory can be used to estimate any analytic function of optical depth. We use this generalization to estimate multiple importance sampling (MIS) weights and introduce two integrators: Unbiased MIS with biased MIS weights and a more efficient but biased combination of MIS and transmittance estimation.
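The Jensen-inequality bias of the naive estimator is easy to reproduce numerically: an unbiased jittered ray-marching estimate of optical depth, pushed through exp(-tau), lands above the true transmittance on average. The small experiment below (with an arbitrary smooth extinction profile chosen for illustration) demonstrates only that bias; the paper's jackknife/UMVU construction is not reimplemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_t = lambda t: 1.5 + np.sin(4.0 * t)    # illustrative extinction coefficient
t0, t1, n_strata, n_trials = 0.0, 2.0, 4, 100_000

def tau_estimate(rng: np.random.Generator) -> float:
    """Unbiased optical depth estimate via stratified jittered ray marching."""
    edges = np.linspace(t0, t1, n_strata + 1)
    ts = edges[:-1] + rng.uniform(size=n_strata) * (edges[1:] - edges[:-1])
    return sigma_t(ts).mean() * (t1 - t0)

# Reference optical depth from a fine midpoint quadrature.
grid = np.linspace(t0, t1, 100_001)
mid = 0.5 * (grid[:-1] + grid[1:])
true_tau = float(np.sum(sigma_t(mid) * np.diff(grid)))

naive_mean = np.mean([np.exp(-tau_estimate(rng)) for _ in range(n_trials)])
print(f"true transmittance  exp(-tau)       = {np.exp(-true_tau):.4f}")
print(f"naive estimator     E[exp(-tau_hat)] = {naive_mean:.4f}  (overestimates)")
```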
Citations: 0