
Latest Publications in Computer Graphics Forum

StyleMM: Stylized 3D Morphable Face Model via Text-Driven Aligned Image Translation
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-11 · DOI: 10.1111/cgf.70234
Seungmi Lee, Kwan Yun, Junyong Noh

We introduce StyleMM, a novel framework that can construct a stylized 3D Morphable Model (3DMM) based on user-defined text descriptions specifying a target style. Building upon a pre-trained mesh deformation network and a texture generator for original 3DMM-based realistic human faces, our approach fine-tunes these models using stylized facial images generated via text-guided image-to-image (i2i) translation with a diffusion model, which serve as stylization targets for the rendered mesh. To prevent undesired changes in identity, facial alignment, or expressions during i2i translation, we introduce a stylization method that explicitly preserves the facial attributes of the source image. By maintaining these critical attributes during image stylization, the proposed approach ensures consistent 3D style transfer across the 3DMM parameter space through image-based training. Once trained, StyleMM enables feed-forward generation of stylized face meshes with explicit control over shape, expression, and texture parameters, producing meshes with consistent vertex connectivity and animatability. Quantitative and qualitative evaluations demonstrate that our approach outperforms state-of-the-art methods in terms of identity-level facial diversity and stylization capability. The code and videos are available at kwanyun.github.io/stylemm_page.

Categories and Subject Descriptors (according to ACM CCS): I.3.6 [Computer Graphics]: Methodology and Techniques—

Citations: 0
Automatic Reconstruction of Woven Cloth from a Single Close-up Image
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-08 · DOI: 10.1111/cgf.70243
C. Wu, A. Khattar, J. Zhu, S. Pettifer, L. Yan, Z. Montazeri

Digital replication of woven fabrics presents significant challenges across a variety of sectors, from online retail to the entertainment industry. To address this, we introduce an inverse rendering pipeline designed to estimate the pattern, geometry, and appearance parameters of woven fabrics given a single close-up image as input. Our work is capable of simultaneously optimizing both discrete and continuous parameters without manual intervention. Using Simulated Annealing, it recovers discrete parameters such as the weave pattern and the ply and fiber counts. It also recovers continuous parameters such as reflection and transmission components, aligning them with the target appearance through differentiable rendering. For irregularities caused by deformation and flyaways, we use 2D Gaussians to approximate them as a post-processing step. Our work does not pursue a perfect match of every fine detail; instead, it targets an automatic, end-to-end reconstruction pipeline that is robust to slight camera rotations and room lighting conditions within an acceptable time (15 minutes on CPU), unlike previous works that are either expensive, require manual intervention, assume a given pattern, geometry, or appearance, or strictly control camera and lighting conditions.
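The discrete side of this search (weave pattern, ply and fiber counts) relies on Simulated Annealing. A minimal, generic sketch of the technique on a toy bit-string objective; the paper's actual energy function and parameterization are of course different, and the helper names below are illustrative only:

```python
import math
import random

def simulated_annealing(energy, neighbor, state, t0=1.0, cooling=0.95,
                        steps=200, seed=0):
    """Generic discrete Simulated Annealing: always accept improving moves,
    accept worsening moves with probability exp(-dE / T), and cool the
    temperature geometrically so the search turns greedy over time."""
    rng = random.Random(seed)
    best, best_e = state, energy(state)
    cur, cur_e, t = state, best_e, t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        d = energy(cand) - cur_e
        if d <= 0 or rng.random() < math.exp(-d / t):
            cur, cur_e = cand, cur_e + d
            if cur_e < best_e:
                best, best_e = cur, cur_e
        t *= cooling
    return best, best_e

# Toy stand-in objective: recover a target "weave pattern" bit-string
# by counting mismatched bits (no local minima, so SA converges easily).
target = [1, 0, 1, 1, 0, 0, 1, 0]
energy = lambda s: sum(a != b for a, b in zip(s, target))

def flip_one(s, rng):
    i = rng.randrange(len(s))
    out = list(s)
    out[i] ^= 1
    return out

best, e = simulated_annealing(energy, flip_one, [0] * 8)
```

The acceptance rule is what distinguishes this from plain hill climbing: early on, high temperature lets the search escape poor configurations; late on, it behaves greedily.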

Citations: 0
Computational Design of Body-Supporting Assemblies
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-08 · DOI: 10.1111/cgf.70237
Yixuan He, Rulin Chen, Bailin Deng, Peng Song

A body-supporting assembly is an assembly of parts that physically supports a human body during activities like sitting, lying, or leaning. A body-supporting assembly has a complex global shape to support a specific human body posture, yet each component part has a relatively simple geometry to facilitate fabrication, storage, and maintenance. In this paper, we aim to model and design a personalized body-supporting assembly that fits a given human body posture, aiming to make the assembly comfortable to use. We choose to model a body-supporting assembly from scratch to offer high flexibility for fitting a given body posture, which, however, makes it challenging to determine the assembly's topology and geometry. To address this problem, we classify parts in the assembly into two categories according to their functionality: supporting parts for fitting different portions of the body and connecting parts for connecting all the supporting parts to form a stable structure. We also propose a geometric representation of supporting parts such that they can take a variety of shapes controlled by a few parameters. Given a body posture as input, we present a computational approach for designing a body-supporting assembly that fits the posture, in which the supporting parts are initialized and optimized to minimize a discomfort measure and then the connecting parts are generated using a procedural approach. We demonstrate the effectiveness of our approach by designing body-supporting assemblies that accommodate a variety of body postures and 3D printing two of them for physical validation.

Citations: 0
DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-08 · DOI: 10.1111/cgf.70241
Ripon Kumar Saha, Yufan Zhang, Jinwei Ye, Suren Jayasuriya

Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield the real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally-generated noise or textures have been proposed in these settings, they often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive/near real-time (> 10 FPS) rates for image resolutions up to 1024 × 1024 (real-time 35 FPS at 256 × 256 resolution with depth, or 33 FPS at 512 × 512 without depth). Our hybrid approach combines spatially-varying wavefront aberrations using Zernike polynomials with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion or tilt. Our approach includes a novel fusion technique that integrates complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using PyTorch, incorporating optimizations such as mixed-precision computation and caching. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is publicly available and open-source: https://github.com/Riponcs/DAATSim.
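The idea of pixel-wise depth modulation of blur can be illustrated with a much cruder stand-in than the paper's Zernike-based PSF interpolation: a per-pixel box blur whose kernel radius grows with normalized depth, so distant pixels degrade more than near ones. The function below is a toy illustration, not DAATSim's implementation:

```python
import numpy as np

def depth_aware_blur(img, depth, max_radius=3):
    """Toy depth-modulated blur: each pixel is averaged over a box window
    whose radius scales with that pixel's normalized depth. Far pixels
    (large depth) get wide kernels; near pixels stay sharp."""
    h, w = img.shape
    d = (depth - depth.min()) / max(np.ptp(depth), 1e-8)  # normalize to [0, 1]
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(d[y, x] * max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out
```

A real simulator would replace the box window with interpolated, spatially-varying PSFs and add the geometric tilt component; the depth-dependent kernel size is the only piece this sketch keeps.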

Citations: 0
TensoIS: A Step Towards Feed-Forward Tensorial Inverse Subsurface Scattering for Perlin Distributed Heterogeneous Media
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-08 · DOI: 10.1111/cgf.70242
Ashish Tiwari, Satyam Bhardwaj, Yash Bachwana, Parag Sarvoday Sahu, T.M.Feroz Ali, Bhargava Chintalapati, Shanmuganathan Raman

Estimating scattering parameters of heterogeneous media from images is a severely under-constrained and challenging problem. Most of the existing approaches model BSSRDF either through an analysis-by-synthesis approach, approximating complex path integrals, or using differentiable volume rendering techniques to account for heterogeneity. However, only a few studies have applied learning-based methods to estimate subsurface scattering parameters, but they assume homogeneous media. Interestingly, no specific distribution is known to us that can explicitly model the heterogeneous scattering parameters in the real world. Notably, procedural noise models such as Perlin and Fractal Perlin noise have been effective in representing intricate heterogeneities of natural, organic, and inorganic surfaces. Leveraging this, we first create HeteroSynth, a synthetic dataset comprising photorealistic images of heterogeneous media whose scattering parameters are modeled using Fractal Perlin noise. Furthermore, we propose Tensorial Inverse Scattering (TensoIS), a learning-based feed-forward framework to estimate these Perlin-distributed heterogeneous scattering parameters from sparse multi-view image observations. Instead of directly predicting the 3D scattering parameter volume, TensoIS uses learnable low-rank tensor components to represent the scattering volume. We evaluate TensoIS on unseen heterogeneous variations over shapes from the HeteroSynth test set, smoke and cloud geometries obtained from open-source realistic volumetric simulations, and some real-world samples to establish its effectiveness for inverse scattering. Overall, this study is an attempt to explore Perlin noise distribution, given the lack of any such well-defined distribution in literature, to potentially model real-world heterogeneous scattering in a feed-forward manner.

Project Page: https://yashbachwana.github.io/TensoIS/
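Fractal Perlin noise, used above to synthesize heterogeneous scattering parameters, sums octaves of band-limited noise at doubling frequency and halving amplitude. The sketch below uses smoothed value noise as a stand-in for true gradient (Perlin) noise, which is simpler to write but shares the octave structure; it is not the HeteroSynth generator:

```python
import numpy as np

def fractal_value_noise(shape, octaves=4, base_res=4, seed=0):
    """Fractal (fBm-style) noise in [0, 1]: sum octaves of smoothly
    interpolated random lattice values, doubling the lattice frequency
    and halving the amplitude per octave. Value noise stands in for
    gradient (Perlin) noise here."""
    rng = np.random.default_rng(seed)
    h, w = shape
    out = np.zeros(shape)
    amp, norm = 1.0, 0.0
    for octave in range(octaves):
        res = base_res * 2 ** octave
        lattice = rng.random((res + 1, res + 1))
        ys = np.linspace(0, res, h, endpoint=False)
        xs = np.linspace(0, res, w, endpoint=False)
        yi, xi = ys.astype(int), xs.astype(int)
        ty, tx = ys - yi, xs - xi
        # Smoothstep fade removes the blocky look of raw bilinear lerp.
        ty, tx = ty * ty * (3 - 2 * ty), tx * tx * (3 - 2 * tx)
        a = lattice[np.ix_(yi, xi)]
        b = lattice[np.ix_(yi, xi + 1)]
        c = lattice[np.ix_(yi + 1, xi)]
        d = lattice[np.ix_(yi + 1, xi + 1)]
        top = a + (b - a) * tx[None, :]
        bot = c + (d - c) * tx[None, :]
        out += amp * (top + (bot - top) * ty[:, None])
        norm += amp
        amp *= 0.5
    return out / norm  # normalized back into [0, 1]
```

Evaluating such a field per voxel (one channel per scattering parameter) yields the kind of smoothly varying heterogeneity the dataset models.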

Citations: 0
High-Performance Elliptical Cone Tracing
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-06 · DOI: 10.1111/cgf.70230
U. Emre, A. Kanak, S. Steinberg

In this work, we discuss elliptical cone traversal in scenes that employ typical triangular meshes. We derive accurate and numerically-stable intersection tests for an elliptical conic frustum with an AABB, plane, edge and a triangle, and analyze the performance of elliptical cone tracing when using different acceleration data structures: SAH-based K-d trees, BVHs as well as a modern 8-wide BVH variant adapted for cone tracing, and compare with ray tracing. In addition, several cone traversal algorithms are analyzed, and we develop novel heuristics and optimizations that give better performance than previous traversal approaches. The results highlight the difference in performance characteristics between rays and cones, and serve to guide the design of acceleration data structures for applications that employ cone tracing.
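To give a flavor of the cone/AABB intersection tests the abstract mentions, here is a deliberately simplified check for a circular cone that marches bounding spheres along the axis. It is conservative in spirit (it can report false positives, and the sphere inflation reduces missed hits between samples); the paper instead derives exact, numerically stable tests for elliptical frusta, which this toy does not attempt:

```python
import math

def sphere_aabb_overlap(center, radius, box_min, box_max):
    """Classic test: squared distance from the sphere center to the AABB
    (clamped per axis) compared against the squared radius."""
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        v = max(lo - c, 0.0, c - hi)
        d2 += v * v
    return d2 <= radius * radius

def cone_aabb_approx(apex, axis, half_angle, length, box_min, box_max,
                     samples=16):
    """Approximate circular-cone vs. AABB overlap: step spheres along the
    (normalized) axis with radii growing as t * tan(half_angle), inflated
    by half the step so the sphere chain covers gaps between samples."""
    ax = math.sqrt(sum(a * a for a in axis))
    axis = [a / ax for a in axis]
    tan_ha = math.tan(half_angle)
    step = length / samples
    for i in range(samples + 1):
        t = i * step
        center = [p + t * a for p, a in zip(apex, axis)]
        radius = t * tan_ha + 0.5 * step
        if sphere_aabb_overlap(center, radius, box_min, box_max):
            return True
    return False
```

The gap between this kind of loose proxy and a tight frustum test is exactly why tighter tests matter: false positives translate into wasted BVH node and triangle visits during traversal.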

Citations: 0
IPFNet: Implicit Primitive Fitting for Robust Point Cloud Segmentation
IF 2.9 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-10-06 · DOI: 10.1111/cgf.70231
Shengdi Zhou, Xiaoqiang Zan, Bin Zhou

The segmentation and fitting of geometric primitives from point clouds is a widely adopted approach for modelling the underlying geometric structure of objects in reverse engineering and numerous graphics applications. Existing methods either overlook the role of geometric information in assisting segmentation or incorporate reconstruction losses without leveraging modern neural implicit field representations, leading to limited robustness against noise and weak expressive power in reconstruction. We propose a point cloud segmentation and fitting framework based on neural implicit representations, fully leveraging neural implicit fields' expressive power and robustness. The key idea is the unification of geometric representation within a neural implicit field framework, enabling seamless integration of geometric loss for improved performance. In contrast to previous approaches that focus solely on clustering in the feature embedding space, our method enhances instance segmentation through semantic-aware point embeddings and simultaneously improves semantic predictions via instance-level feature fusion. Furthermore, we incorporate 3D-specific cues such as spatial dimensions and geometric connectivity, which are uniquely informative in the 3D domain. Extensive experiments and comparisons against previous methods demonstrate our robustness and superiority.
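As a minimal instance of the classical "fit a geometric primitive to points" step, here is a least-squares plane fit via SVD (the normal is the right singular vector with the smallest singular value). IPFNet itself represents primitives with learned neural implicit fields rather than this analytic fit; the snippet only illustrates the baseline problem:

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through a point cloud: center the points,
    take the SVD, and read the normal off the singular vector with the
    smallest singular value. Returns (centroid, unit_normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]  # rows of vt are ordered by singular value
```

Analogous closed-form or iterative fits exist for spheres, cylinders, and cones; the segmentation problem is deciding which points belong to which primitive before any such fit is meaningful.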

Citations: 0
Single-Line Drawing Vectorization
IF 2.9 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2025-10-06 DOI: 10.1111/cgf.70228
Tanguy Magne, Olga Sorkine-Hornung

Vectorizing line drawings is a repetitive, yet necessary task that professional creatives must perform to obtain an easily editable and scalable digital representation of a raster sketch. State-of-the-art automatic methods in this domain can create a series of curves that closely fit the appearance of the drawing. However, they often neglect the line parameterization. Thus, their vector representation cannot be edited naturally by following the drawing order. We present a novel method for single-line drawing vectorization that addresses this issue. Single-line drawings consist of a single stroke, where the line can intersect itself multiple times, making the drawing order non-trivial to recover. Our method fits a single parametric curve, represented as a Bézier spline, to approximate the stroke in the input raster image. To this end, we produce a graph representation of the input and employ geometric priors and a specially trained neural network to correctly capture and classify curve intersections and their traversal configuration. Our method is easily extended to drawings containing multiple strokes while preserving their integrity and order. We compare our vectorized results with the work of several artists, showing that our stroke order is similar to the one artists employ naturally. Our vectorization method achieves state-of-the-art results in terms of similarity with the original drawing and quality of the vectorization on a benchmark of single-line drawings. Our method's results can be refined interactively, making it easy to integrate into professional workflows. Our code and results are available at https://github.com/tanguymagne/SLD-Vectorization.
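The fitted representation here is a Bézier spline, i.e. a chain of cubic Bézier segments. As background (this is the standard evaluation primitive, not the authors' code), a single cubic segment can be evaluated with De Casteljau's algorithm, which repeatedly linearly interpolates between control points:

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a cubic Bézier segment at parameter t via De Casteljau.

    ctrl: (4, 2) array-like of 2D control points; t in [0, 1].
    Each pass replaces n points with n-1 lerped points until one remains.
    """
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]
```

Sampling `t` densely along consecutive segments traces the stroke in its parameterized order, which is exactly the editability property the abstract argues appearance-only fits lose.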

Citations: 0
Real-time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network
IF 2.9 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2025-09-16 DOI: 10.1111/cgf.70276
Chunxiao Xu, Xinran Xu, Jiatian Zhang, Yufei Liu, Yiheng Cao, Lingxiao Zhao

Direct volume rendering (DVR) is a widely used technique in the visualisation of volumetric data. As an important DVR technique, volumetric path tracing (VPT) simulates light transport to produce realistic rendering results, which provides enhanced perception and understanding for users, especially in the field of medical imaging. VPT, based on the Monte Carlo (MC) method, typically requires a large number of samples to generate noise-free results. However, in real-time applications, only a limited number of samples per pixel is allowed, resulting in significant noise. This paper introduces a novel neural denoising approach that utilises a new feature fusion method for VPT. Our method uses a feature decomposition technique that separates radiance into components according to noise levels. Our new decomposition technique mitigates biases found in the contemporary decoupling denoising algorithm and shows better utilisation of samples. A lightweight dual-input network is designed to correlate these components with noise-free ground truth. Additionally, for denoising sequences of video frames, we develop a learning-based temporal method that calculates temporal weight maps, blending reprojected results of previous frames with spatially denoised current frames. Comparative results demonstrate that our network performs faster inference than existing methods and can produce denoised output of higher quality in real time.
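The temporal step blends reprojected history frames with the spatially denoised current frame via per-pixel weight maps. A minimal sketch of that blend (illustrative only — `temporal_blend` is a hypothetical name, and in the paper the weight map is predicted by a learned network rather than fixed):

```python
import numpy as np

def temporal_blend(denoised_curr, reproj_prev, weight):
    """Blend a reprojected previous frame with the spatially denoised
    current frame using a per-pixel temporal weight map in [0, 1].

    weight = 1 keeps the history; weight = 0 falls back to the current
    frame (e.g. at disocclusions where reprojection is invalid).
    Frames are (H, W, C) arrays; weight is (H, W).
    """
    w = np.clip(weight, 0.0, 1.0)[..., None]  # broadcast over channels
    return w * reproj_prev + (1.0 - w) * denoised_curr
```

This is the classical exponential-accumulation form used by temporal antialiasing and denoising filters; the paper's contribution is learning `weight` instead of hand-tuning it.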

Citations: 0
Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image
IF 2.9 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2025-09-05 DOI: 10.1111/cgf.70277
Dongjin Huang, Yongsheng Shi, Jiantao Qu, Jinhua Liu, Wen Tang

We propose Hi3DFace, a novel framework for simultaneous de-occlusion and high-fidelity 3D face reconstruction. To address real-world occlusions, we construct a diverse facial dataset by simulating common obstructions and present TMANet, a transformer-based multi-scale attention network that effectively removes occlusions and restores clean face images. For the 3D face reconstruction stage, we propose a coarse-medium-fine self-supervised scheme. In the coarse reconstruction pipeline, we adopt a face regression network to predict 3DMM coefficients for generating a smooth 3D face. In the medium-scale reconstruction pipeline, we propose a novel depth displacement network, DDFTNet, to remove noise and restore rich details to the smooth 3D geometry. In the fine-scale reconstruction pipeline, we design a GCN (graph convolutional network) refiner to enhance the fidelity of 3D textures. Additionally, a light-aware network (LightNet) is proposed to distil lighting parameters, ensuring illumination consistency between reconstructed 3D faces and input images. Extensive experimental results demonstrate that the proposed Hi3DFace significantly outperforms state-of-the-art reconstruction methods on four public datasets, and five constructed occlusion-type datasets. Hi3DFace achieves robustness and effectiveness in removing occlusions and reconstructing 3D faces from real-world occluded facial images.
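The coarse stage regresses 3DMM coefficients, and the 3DMM itself is a standard linear model: vertices = mean shape + identity basis × identity coefficients + expression basis × expression coefficients. A minimal numpy sketch of that construction (generic 3DMM, not Hi3DFace's specific bases; all names are illustrative):

```python
import numpy as np

def morphable_face(mean_shape, id_basis, exp_basis, id_coeff, exp_coeff):
    """Linear 3D Morphable Model: v = mean + B_id @ a + B_exp @ b.

    mean_shape: (3N,) flattened mean face vertices
    id_basis:   (3N, K_id) identity (shape) basis
    exp_basis:  (3N, K_exp) expression basis
    Returns vertices as an (N, 3) array.
    """
    verts = mean_shape + id_basis @ id_coeff + exp_basis @ exp_coeff
    return verts.reshape(-1, 3)
```

Because the model is linear in both coefficient vectors, a regression network only has to predict the low-dimensional `id_coeff` and `exp_coeff` to produce a smooth coarse mesh, which the medium- and fine-scale stages then refine.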

Citations: 0