
Latest Publications in ACM Transactions on Graphics

Automatic Sampling for Discontinuities in Differentiable Shaders
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763291
Yash Belhe, Ishit Mehta, Wesley Chang, Iliyan Georgiev, Michael Gharbi, Ravi Ramamoorthi, Tzu-Mao Li
We present a novel method to differentiate integrals of discontinuous functions, which are common in inverse graphics, computer vision, and machine learning applications. Previous methods either require specialized routines to sample the discontinuous boundaries of predetermined primitives, or use reparameterization techniques that suffer from high variance. In contrast, our method handles general discontinuous functions, expressed as shader programs, without requiring manually specified boundary sampling routines. We achieve this through a program transformation that converts discontinuous functions into piecewise constant ones, enabling efficient boundary sampling through a novel segment snapping technique, and accurate derivatives at the boundary by simply comparing values on both sides of the discontinuity. Our method handles both explicit boundaries (polygons, ellipses, Bézier curves) and implicit ones (neural networks, noise-based functions, swept surfaces). We demonstrate that our system supports a wide range of applications, including painterly rendering, raster image fitting, constructive solid geometry, swept surfaces, mosaicing, and ray marching.
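The core of the boundary treatment, taken down to one dimension, can be illustrated with a toy example: for an integrand that jumps at a parameter-dependent location, differentiating the integrand pointwise misses the jump entirely, while sampling the boundary and comparing the values on its two sides recovers the full derivative. The sketch below is a minimal NumPy illustration of that principle, not the paper's shader program transformation or segment-snapping sampler.

```python
import numpy as np

a, b, theta = 3.0, 1.0, 0.4

def f(x, theta):
    # Discontinuous integrand: jumps from a to b at x = theta.
    return np.where(x < theta, a, b)

# Closed form: I(theta) = a*theta + b*(1 - theta), so dI/dtheta = a - b = 2.
def I(theta):
    return a * theta + b * (1.0 - theta)

fd = (I(theta + 1e-4) - I(theta - 1e-4)) / 2e-4    # reference derivative ≈ 2

# "Interior" derivative: differentiating f w.r.t. theta at fixed x gives 0
# almost everywhere, so a naive Monte Carlo derivative estimator returns 0.
x = np.random.default_rng(0).uniform(0, 1, 100_000)
interior = np.zeros_like(x).mean()                 # = 0, misses the jump

# Boundary term: sample the discontinuity and compare values on both sides,
# scaled by how fast the boundary moves with theta (here exactly 1).
delta = 1e-6
boundary = (f(theta - delta, theta) - f(theta + delta, theta)) * 1.0

print(fd, interior, boundary)   # ≈ 2.0, 0.0, 2.0
```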
Citations: 0
Robust Derivative Estimation with Walk on Stars
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763333
Zihan Yu, Rohan Sawhney, Bailey Miller, Lifan Wu, Shuang Zhao
Monte Carlo methods based on the walk on spheres (WoS) algorithm offer a parallel, progressive, and output-sensitive approach for solving partial differential equations (PDEs) in complex geometric domains. Building on this foundation, the walk on stars (WoSt) method generalizes WoS to support mixed Dirichlet, Neumann, and Robin boundary conditions. However, accurately computing spatial derivatives of PDE solutions remains a major challenge: existing methods exhibit high variance and bias near the domain boundary, especially in Neumann-dominated problems. We address this limitation with a new extension of WoSt specifically designed for derivative estimation. Our method reformulates the boundary integral equation (BIE) for Poisson PDEs by directly leveraging the harmonicity of spatial derivatives. Combined with a tailored random-walk sampling scheme and an unbiased early termination strategy, we achieve significantly improved accuracy in derivative estimates near the Neumann boundary. We further demonstrate the effectiveness of our approach across various tasks, including recovering the non-unique solution to a pure Neumann problem with reduced bias and variance, constructing divergence-free vector fields, and optimizing parametrically defined boundaries under PDE constraints.
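For readers unfamiliar with the underlying estimator, the sketch below implements the classical walk-on-spheres recursion for the Laplace equation on the unit square with Dirichlet data, which is the starting point the abstract builds on; it is not the walk-on-stars derivative estimator proposed in the paper, and the domain and boundary data are illustrative choices.

```python
# Walk on spheres for Δu = 0 on the unit square with Dirichlet data
# g(x, y) = x² − y² (itself harmonic, so u = g and the estimate can be checked).
import numpy as np

def g(p):                        # Dirichlet boundary values
    return p[0]**2 - p[1]**2

def dist_to_boundary(p):         # distance from p to the square's boundary
    x, y = p
    return min(x, 1 - x, y, 1 - y)

def closest_boundary_point(p):
    x, y = p
    candidates = [(0.0, y), (1.0, y), (x, 0.0), (x, 1.0)]
    return min(candidates, key=lambda q: (q[0] - x)**2 + (q[1] - y)**2)

def wos_estimate(p0, n_walks=10_000, eps=1e-4, rng=np.random.default_rng(0)):
    total = 0.0
    for _ in range(n_walks):
        p = np.array(p0, dtype=float)
        while True:
            r = dist_to_boundary(p)
            if r < eps:                      # close enough: read boundary data
                total += g(closest_boundary_point(p))
                break
            phi = rng.uniform(0.0, 2.0 * np.pi)
            p = p + r * np.array([np.cos(phi), np.sin(phi)])  # jump to sphere
    return total / n_walks

p0 = (0.3, 0.7)
print(wos_estimate(p0), g(p0))   # estimate ≈ exact value 0.09 − 0.49 = −0.40
```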
Citations: 0
Evaluating and Sampling Glinty NDFs in Constant Time
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763282
Pauli Kemppinen, Loïs Paulin, Théo Thonat, Jean-Marc Thiery, Jaakko Lehtinen, Tamy Boubekeur
Geometric features between the micro and macro scales produce an expressive family of visual effects grouped under the term "glints". Efficiently rendering these effects amounts to finding the highlights caused by the geometry under each pixel. To allow for fast rendering, we represent our faceted geometry as a 4D point process on an implicit multiscale grid, designed to efficiently find the facets most likely to cause a highlight. The facets' normals are generated to match a given micro-facet normal distribution such as Trowbridge-Reitz (GGX) or Beckmann, to which our model converges under increasing surface area. Our method is simple to implement, memory-and-precomputation-free, allows for importance sampling and covers a wide range of different appearances such as anisotropic as well as individually colored particles. We provide a base implementation as a standalone fragment shader.
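The Trowbridge-Reitz (GGX) distribution mentioned above has a simple closed-form inverse-CDF sampler, shown below as a point of reference for the target distribution the generated facet normals converge to; this is the textbook sampler and density, not the paper's constant-time point-process construction, and `alpha` is an illustrative roughness value.

```python
import numpy as np

def sample_ggx_normals(alpha, n, rng=np.random.default_rng(0)):
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    theta = np.arctan(alpha * np.sqrt(u1 / (1.0 - u1)))   # inverse CDF in theta
    phi = 2.0 * np.pi * u2
    st, ct = np.sin(theta), np.cos(theta)
    # Unit normals in the local shading frame (z = macro-surface normal).
    return np.stack([st * np.cos(phi), st * np.sin(phi), ct], axis=-1)

def ggx_ndf(cos_theta, alpha):
    # D(m) for the isotropic GGX (Trowbridge-Reitz) distribution.
    c2 = cos_theta**2
    denom = c2 * (alpha**2 - 1.0) + 1.0
    return alpha**2 / (np.pi * denom**2)

normals = sample_ggx_normals(alpha=0.3, n=100_000)
print(normals.shape, normals[:, 2].mean())   # (100000, 3), concentrated near z
```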
Citations: 0
ConsiStyle: Style Diversity in Training-Free Consistent T2I Generation
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763303
Yohai Mazuz, Janna Bruner, Lior Wolf
In text-to-image models, consistent character generation is the task of achieving text alignment while maintaining the subject's appearance across different prompts. However, since style and appearance are often entangled, existing methods struggle to preserve consistent subject characteristics while adhering to varying style prompts. Current approaches for consistent text-to-image generation typically rely on large-scale fine-tuning on curated image sets or per-subject optimization, which either fail to generalize across prompts or do not align well with textual descriptions. Meanwhile, training-free methods often fail to maintain subject consistency across different styles. In this work, we introduce a training-free method that, for the first time, jointly achieves style preservation and subject consistency across varied styles. The attention matrices are manipulated such that Queries and Keys are obtained from the anchor image(s) that are used to define the subject, while the Values are imported from a parallel copy that is not subject-anchored. Additionally, cross-image components are added to the self-attention mechanism by expanding the Key and Value matrices. To avoid shifting away from the target style, we align the statistics of the Value matrices. As is demonstrated in a comprehensive battery of qualitative and quantitative experiments, our method effectively decouples style from subject appearance and enables faithful generation of text-aligned images with consistent characters across diverse styles. Code will be available at our project page: jbruner23.github.io/consistyle.
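To make the attention manipulation concrete, the toy sketch below builds a single self-attention head where Queries and Keys come from subject-anchored features while Values come from a parallel branch, with the Value statistics matched between the two branches; the feature shapes, projection matrices, and the direction of the statistics alignment are illustrative assumptions rather than the paper's exact layers.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def match_stats(values, reference):
    # Align per-channel mean/std of `values` to those of `reference`
    # (direction of the alignment here is an illustrative choice).
    mu_v, std_v = values.mean(0), values.std(0) + 1e-6
    mu_r, std_r = reference.mean(0), reference.std(0) + 1e-6
    return (values - mu_v) / std_v * std_r + mu_r

rng = np.random.default_rng(0)
tokens, dim = 64, 32
anchor_feats = rng.standard_normal((tokens, dim))     # subject-anchored branch
style_feats  = rng.standard_normal((tokens, dim))     # parallel, style branch

Wq, Wk, Wv = (rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(3))

Q = anchor_feats @ Wq                  # queries from the anchor
K = anchor_feats @ Wk                  # keys from the anchor
V = match_stats(style_feats @ Wv,      # values from the parallel branch,
                anchor_feats @ Wv)     # statistics matched across branches

attn = softmax(Q @ K.T / np.sqrt(dim))
out = attn @ V
print(out.shape)                       # (64, 32)
```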
Citations: 0
Lightweight, Edge-Aware, and Temporally Consistent Supersampling for Mobile Real-Time Rendering
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763348
Sipeng Yang, Jiayu Ji, Junhao Zhuge, Jinzhe Zhao, Qiang Qiu, Chen Li, Yuzhong Yan, Kerong Wang, Lingqi Yan, Xiaogang Jin
Supersampling has proven highly effective in enhancing visual fidelity by reducing aliasing, increasing resolution, and generating interpolated frames. It has become a standard component of modern real-time rendering pipelines. However, on mobile platforms, deep learning-based supersampling methods remain impractical due to stringent hardware constraints, while non-neural supersampling techniques often fall short in delivering perceptually high-quality results. In particular, producing visually pleasing reconstructions and temporally coherent interpolations is still a significant challenge in mobile settings. In this work, we present a novel, lightweight supersampling framework tailored for mobile devices. Our approach substantially improves both image reconstruction quality and temporal consistency while maintaining real-time performance. For super-resolution, we propose an intra-pixel object coverage estimation method for reconstructing high-quality anti-aliased pixels in edge regions, a gradient-guided strategy for non-edge areas, and a temporal sample accumulation approach to improve overall image quality. For frame interpolation, we develop an efficient motion estimation module coupled with a lightweight fusion scheme that integrates both estimated optical flow and rendered motion vectors, enabling temporally coherent interpolation of object dynamics and lighting variations. Extensive experiments demonstrate that our method consistently outperforms existing baselines in both perceptual image quality and temporal smoothness, while maintaining real-time performance on mobile GPUs. A demo application and supplementary materials are available on the project page.
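As a point of reference for the temporal accumulation component, the sketch below shows a generic exponentially weighted accumulation with neighborhood clamping of the history buffer; motion-vector reprojection is omitted (a static camera is assumed), and the blend weight is an illustrative choice rather than the paper's.

```python
import numpy as np

def accumulate(history, current, alpha=0.1):
    # Per-pixel min/max over a 3x3 neighborhood of the current frame.
    h, w, _ = current.shape
    pad = np.pad(current, ((1, 1), (1, 1), (0, 0)), mode="edge")
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    lo, hi = stack.min(0), stack.max(0)
    clamped = np.clip(history, lo, hi)        # reject stale history
    return (1.0 - alpha) * clamped + alpha * current

rng = np.random.default_rng(0)
h, w = 64, 64
history = rng.uniform(size=(h, w, 3))
for _ in range(8):                            # noisy frames of a constant scene
    frame = 0.5 + 0.05 * rng.standard_normal((h, w, 3))
    history = accumulate(history, frame)
print(history.mean())                         # converges toward ~0.5
```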
Citations: 0
CFC: Simulating Character-Fluid Coupling using a Two-Level World Model
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763318
Zhiyang Dou, Chen Peng, Xinyu Lu, Xiaohan Ye, Lixing Fang, Yuan Liu, Wenping Wang, Chuang Gan, Lingjie Liu, Taku Komura
Humans possess the ability to master a wide range of motor skills, enabling them to quickly and flexibly adapt to the surrounding environment. Despite recent progress in replicating such versatile human motor skills, existing research often oversimplifies or inadequately captures the complex interplay between human body movements and highly dynamic environments, such as interactions with fluids. In this paper, we present Character-Fluid Coupling (CFC), a world model for simulating human-fluid interactions via two-way coupling. We introduce a two-level world model that consists of a Physics-Informed Neural Network (PINN)-based model for fluid dynamics and a character world model capturing body dynamics under various external forces. This two-level world model adeptly predicts the dynamics of the fluid and its influence on rigid bodies via force prediction, sidestepping the computational burden of fluid simulation and providing policy gradients for efficient policy training. Once trained, our system can control characters to complete high-level tasks while adaptively responding to environmental changes. We also show that the fluid induces emergent behaviors in the characters, enhancing motion diversity and interactivity. Extensive experiments underscore the effectiveness of CFC, demonstrating its ability to produce high-quality, realistic human-fluid interaction animations.
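As a schematic of coupling through force prediction, the sketch below integrates a rigid body whose fluid interaction is supplied by a force model queried each step; here that model is a hand-written drag-plus-buoyancy closure standing in for the learned PINN-based fluid world model, so the numbers and the closure itself are purely illustrative.

```python
import numpy as np

def predicted_fluid_force(position, velocity):
    # Stand-in for a learned force predictor (illustrative, not the paper's).
    rho, volume, drag = 1000.0, 1e-3, 6.0
    buoyancy = np.array([0.0, rho * 9.81 * volume])   # upward push
    return buoyancy - drag * velocity                 # plus linear drag

mass, dt = 2.0, 1e-3
pos = np.array([0.0, 1.0])        # start above the "water" surface at y = 0
vel = np.array([0.0, 0.0])
gravity = np.array([0.0, -9.81 * mass])

for step in range(5000):
    force = gravity.copy()
    if pos[1] < 0.0:              # apply fluid forces only while submerged
        force += predicted_fluid_force(pos, vel)
    vel = vel + dt * force / mass # semi-implicit Euler
    pos = pos + dt * vel

print(pos, vel)   # body enters the fluid and settles toward a terminal sinking speed
```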
Citations: 0
PractiLight: Practical Light Control Using Foundational Diffusion Models
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763342
Yotam Erel, Rishabh Dabral, Vladislav Golyanik, Amit H. Bermano, Christian Theobalt
Light control in generated images is a difficult task, posing specific challenges, spanning over the entire image and frequency spectrum. Most approaches tackle this problem by training on extensive yet domain-specific datasets, limiting the inherent generalization and applicability of the foundational backbones used. Instead, PractiLight is a practical approach, effectively leveraging foundational understanding of recent generative models for the task. Our key insight is that lighting relationships in an image are similar in nature to token interaction in self-attention layers, and hence are best represented there. Based on this and other analyses regarding the importance of early diffusion iterations, PractiLight trains a lightweight LoRA regressor to produce the direct-irradiance map for a given image, using a small set of training images. We then employ this regressor to incorporate the desired lighting into the generation process of another image using Classifier Guidance. This careful design generalizes well to diverse conditions and image domains. We demonstrate state-of-the-art performance in terms of quality and control with proven parameter and data efficiency compared to leading works over a wide variety of scene types. We hope this work affirms that image lighting can feasibly be controlled by tapping into foundational knowledge, enabling practical and general relighting.
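The classifier-guidance step described above can be sketched schematically: a regressor maps the current latent to an irradiance map, and the gradient of its mismatch with the desired lighting nudges the denoising update. In the sketch below, `denoiser` and `irradiance_regressor` are hypothetical stand-ins and the update rule is schematic, not the paper's or any specific library's sampler.

```python
import torch

def guided_step(latent, t, denoiser, irradiance_regressor, target_map,
                guidance_scale=1.0):
    latent = latent.detach().requires_grad_(True)
    pred_map = irradiance_regressor(latent, t)             # lighting estimate
    loss = torch.nn.functional.mse_loss(pred_map, target_map)
    grad = torch.autograd.grad(loss, latent)[0]            # d(loss)/d(latent)
    with torch.no_grad():
        eps = denoiser(latent, t)                          # predicted noise
        return latent - 0.1 * eps - guidance_scale * grad  # schematic update

# Toy stand-ins so the sketch runs end to end.
denoiser = lambda x, t: 0.01 * x
irradiance_regressor = lambda x, t: x.mean(dim=1, keepdim=True)
latent = torch.randn(1, 4, 32, 32)
target = torch.zeros(1, 1, 32, 32)
out = guided_step(latent, 0, denoiser, irradiance_regressor, target)
print(out.shape)   # torch.Size([1, 4, 32, 32])
```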
Citations: 0
One-shot Embroidery Customization via Contrastive LoRA Modulation
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763290
Jun Ma, Qian He, Gaofeng He, Huang Chen, Chen Liu, Xiaogang Jin, Huamin Wang
Diffusion models have significantly advanced image manipulation techniques, and their ability to generate photorealistic images is beginning to transform retail workflows, particularly in presale visualization. Beyond artistic style transfer, the capability to perform fine-grained visual feature transfer is becoming increasingly important. Embroidery is a textile art form characterized by intricate interplay of diverse stitch patterns and material properties, which poses unique challenges for existing style transfer methods. To explore the customization for such fine-grained features, we propose a novel contrastive learning framework that disentangles fine-grained style and content features with a single reference image, building on the classic concept of image analogy. We first construct an image pair to define the target style, and then adopt a similarity metric based on the decoupled representations of pretrained diffusion models for style-content separation. Subsequently, we propose a two-stage contrastive LoRA modulation technique to capture fine-grained style features. In the first stage, we iteratively update the whole LoRA and the selected style blocks to initially separate style from content. In the second stage, we design a contrastive learning strategy to further decouple style and content through self-knowledge distillation. Finally, we build an inference pipeline to handle image or text inputs with only the style blocks. To evaluate our method on fine-grained style transfer, we build a benchmark for embroidery customization. Our approach surpasses prior methods on this task and further demonstrates strong generalization to three additional domains: artistic style transfer, sketch colorization, and appearance transfer. Our project is available at: https://style3d.github.io/embroidery_customization.
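As background for the contrastive objective, the sketch below shows a generic InfoNCE-style loss over embeddings, the standard building block behind such frameworks; the embedding sources, temperature, and batch construction are illustrative and not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    # anchor, positive: (d,) ; negatives: (n, d)
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    pos_sim = (anchor * positive).sum() / temperature          # scalar
    neg_sim = negatives @ anchor / temperature                 # (n,)
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    return -F.log_softmax(logits, dim=0)[0]   # pull positive, push negatives

d = 128
anchor = torch.randn(d)
positive = anchor + 0.1 * torch.randn(d)      # same style, slightly perturbed
negatives = torch.randn(16, d)                # other styles / contents
print(info_nce(anchor, positive, negatives).item())
```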
Citations: 0
CrossGen: Learning and Generating Cross Fields for Quad Meshing
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763299
Qiujie Dong, Jiepeng Wang, Rui Xu, Cheng Lin, Yuan Liu, Shiqing Xin, Zichun Zhong, Xin Li, Changhe Tu, Taku Komura, Leif Kobbelt, Scott Schaefer, Wenping Wang
Cross fields play a critical role in various geometry processing tasks, especially for quad mesh generation. Existing methods for cross field generation often struggle to balance computational efficiency with generation quality, using slow per-shape optimization. We introduce CrossGen , a novel framework that supports both feed-forward prediction and latent generative modeling of cross fields for quad meshing by unifying geometry and cross field representations within a joint latent space. Our method enables extremely fast computation of high-quality cross fields of general input shapes, typically within one second without per-shape optimization. Our method assumes a point-sampled surface, also called a point-cloud surface , as input, so we can accommodate various surface representations by a straightforward point sampling process. Using an auto-encoder network architecture, we encode input point-cloud surfaces into a sparse voxel grid with fine-grained latent spaces, which are decoded into both SDF-based surface geometry and cross fields (see the teaser figure). We also contribute a dataset of models with both high-quality signed distance fields (SDFs) representations and their corresponding cross fields, and use it to train our network. Once trained, the network is capable of computing a cross field of an input surface in a feed-forward manner, ensuring high geometric fidelity, noise resilience, and rapid inference. Furthermore, leveraging the same unified latent representation, we incorporate a diffusion model for computing cross fields of new shapes generated from partial input, such as sketches. To demonstrate its practical applications, we validate CrossGen on the quad mesh generation task for a large variety of surface shapes. Experimental results demonstrate that CrossGen generalizes well across diverse shapes and consistently yields high-fidelity cross fields, thus facilitating the generation of high-quality quad meshes.
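A common way to make cross fields amenable to averaging and learning is the 4-fold symmetric angle encoding sketched below, where a cross defined modulo 90° is stored as (cos 4θ, sin 4θ); this is a generic representation note, not the paper's voxel-based latent space or decoder.

```python
import numpy as np

def encode(theta):
    # Two crosses related by a 90° rotation get identical encodings.
    return np.stack([np.cos(4 * theta), np.sin(4 * theta)], axis=-1)

def decode(v):
    return np.arctan2(v[..., 1], v[..., 0]) / 4.0   # angle in (−45°, 45°]

a = np.deg2rad(10.0)
b = a + np.pi / 2                  # same cross, rotated by 90 degrees
print(np.allclose(encode(a), encode(b)))            # True: identical encoding

# Averaging two nearby crosses in the encoded space, then decoding:
avg = decode((encode(np.deg2rad(10.0)) + encode(np.deg2rad(20.0))) / 2)
print(np.rad2deg(avg))                              # ≈ 15 degrees
```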
Citations: 0
Force-Dual Modes: Subspace Design from Stochastic Forces
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763310
Otman Benchekroun, Eitan Grinspun, Maurizio Chiaramonte, Philip Allen Etter
Designing subspaces for Reduced Order Modeling (ROM) is crucial for accelerating finite element simulations in graphics and engineering. Unfortunately, it's not always clear which subspace is optimal for arbitrary dynamic simulation. We propose to construct simulation subspaces from force distributions, allowing us to tailor such subspaces to common scene interactions involving constraint penalties, handle-based control, contact, and musculoskeletal actuation. To achieve this, we adopt a statistical perspective on Reduced Order Modeling, which allows us to push such user-designed force distributions through a linearized simulation to obtain a dual distribution on displacements. To construct our subspace, we then fit a low-rank Gaussian model to this displacement distribution, which we show generalizes Linear Modal Analysis subspaces for uncorrelated unit variance force distributions, as well as Green's Function subspaces for low rank force distributions. We show our framework allows for the construction of subspaces that are optimal with respect to both physical material properties and arbitrary force distributions, as observed in handle-based, contact, and musculoskeletal scene interactions.
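The statistical recipe described above can be sketched directly: draw forces from a user-chosen distribution, push each sample through the linearized system K u = f, and fit a low-rank basis to the resulting displacements via PCA. In the sketch below, the 1D spring-chain stiffness matrix and the half-domain force distribution are illustrative stand-ins for a real FEM setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples, rank = 50, 500, 6

# Stiffness of a fixed-fixed 1D spring chain (tridiagonal, SPD).
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Force distribution: forces concentrated on the right half of the chain,
# mimicking a handle- or contact-dominated interaction.
F = np.zeros((n_samples, n))
F[:, n // 2:] = rng.standard_normal((n_samples, n - n // 2))

U = np.linalg.solve(K, F.T).T          # dual displacement samples, u = K⁻¹ f

# Low-rank Gaussian fit: principal components of the displacement samples.
U_centered = U - U.mean(axis=0)
_, _, Vt = np.linalg.svd(U_centered, full_matrices=False)
modes = Vt[:rank]                      # (rank, n) subspace basis

print(modes.shape)                     # (6, 50)
```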
Citations: 0