
Computer Graphics Forum: Latest Publications

Front Matter
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-08-04 · DOI: 10.1111/cgf.70165
Copenhagen, Denmark

Beibei Wang - Nanjing University
Alexander Wilkie - Charles University

Conference Chair

Laurent Belcour - Intel Corporation
Jiří Bittner - Czech Technical University in Prague
Tamy Boubekeur - Adobe Research
Matt Jen-Yuan Chiang - Meta Reality Labs Research
Valentin Deschaintre - Adobe Research
Jean-Michel Dischler - ICUBE - Université de Strasbourg
George Drettakis - INRIA, Université Côte d'Azur
Farshad Einabadi - University of Surrey
Arthur Firmino - KeyShot
Elena Garces - Adobe
Iliyan Georgiev - Adobe Research
Abhijeet Ghosh - Imperial College London
Yotam Gingold - George Mason University
Pascal Grittmann - Saarland University
Thorsten Grosch - TU Clausthal
Adrien Gruson - École de Technologie Supérieure
Jie Guo - Nanjing University
Toshiya Hachisuka - University of Waterloo
David Hahn - TU Wien
Johannes Hanika - Karlsruhe Institute of Technology
Milos Hasan - Adobe Research
Sebastian Herholz - Intel Corporation
Nicolas Holzschuch - INRIA
Tomáš Iser - Charles University
Julian Iseringhausen - Google Research
Wojciech Jarosz - Dartmouth College
Alisa Jung - IVD / Karlsruhe Institute of Technology
Markus Kettunen - NVIDIA
Manuel Lagunas - Amazon
Sungkil Lee - Sungkyunkwan University
Tzu-Mao Li - UC San Diego
Daqi Lin - NVIDIA
Jorge Lopez-Moreno - Universidad Rey Juan Carlos
Steve Marschner - Cornell University
Daniel Martin - Universidad de Zaragoza
Bochang Moon - Gwangju Institute of Science and Technology
Krishna Mullia - Adobe Research
Jacob Munkberg - NVIDIA Corporation
Merlin Nimier-David - NVIDIA
Emilie Nogue - Imperial College London
Jan Novak - NVIDIA
Pieter Peers - College of William & Mary
Christoph Peters - TU Delft
Matt Pharr - NVIDIA
Julien Philip - Netflix Eyeline Studios
Alina Pranovich - Technical University of Denmark
Marco Salvi - NVIDIA
Nicolas Savva - Cornell University
Gurprit Singh - Max-Planck Institute for Informatics, Saarbrücken
Shlomi Steinberg - University of California Santa Barbara
Daniel Sýkora - CTU in Prague, FEE
Natalya Tatarchuk - Activision / Microsoft
Konstantinos Vardis - Huawei Technologies
Delio Vicini - Google
Jiří Vorba - Weta Digital
Rui Wang - Zhejiang University
Li-Yi Wei - Adobe Research
Tien-Tsin Wong - Monash University
Hongzhi Wu - Zhejiang University
Kui Wu - LightSpeed Studios
Lifan Wu - NVIDIA
Mengqi Xia - Yale University
Kun Xu - Tsinghua University
Kai Yan - University of California Irvine
Ling-Qi Yan - UC Santa Barbara
Huo Yuchi - Zhejiang University
Cem Yuksel - University of Utah
Tizian Zeltner - NVIDIA
Shuang Zhao - University of California Irvine
Junqiu Zhu - University of California Santa Barbara
Citations: 0
MatSwap: Light-aware material transfers in images
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70168
I. Lopes, V. Deschaintre, Y. Hold-Geoffroy, R. de Charette

We present MatSwap, a method to transfer materials to designated surfaces in an image realistically. Such a task is non-trivial due to the large entanglement of material appearance, geometry, and lighting in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations requiring artist knowledge and 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material—as observed on a flat surface—and its appearance within the scene, without the need for explicit UV mapping. To achieve this, we rely on a custom light- and geometry-aware diffusion model. We fine-tune a large-scale pre-trained text-to-image model for material transfer using our synthetic dataset, preserving its strong priors to ensure effective generalization to real images. As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene. MatSwap is evaluated on synthetic and real images showing that it compares favorably to recent works. Our code and data are made publicly available on https://github.com/astra-vision/MatSwap
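The light- and geometry-aware conditioning described above is the part most readers will want to picture concretely. The following is a minimal sketch, assuming the diffusion model is conditioned on per-pixel channels such as a target-region mask, normals, a scalar shading estimate, and the flat material exemplar; the function name `build_conditioning` and this exact channel layout are illustrative assumptions, not MatSwap's actual interface.

```python
import numpy as np

def build_conditioning(photo, region_mask, normals, irradiance, material_exemplar):
    """Stack the per-pixel signals a light- and geometry-aware diffusion model
    could be conditioned on. The channel layout is illustrative only."""
    # photo:             H x W x 3, target image in [0, 1]
    # region_mask:       H x W x 1, 1 where the material should be swapped
    # normals:           H x W x 3, camera-space normals in [-1, 1]
    # irradiance:        H x W x 1, scalar shading / lighting estimate
    # material_exemplar: H x W x 3, flat (fronto-parallel) material crop,
    #                    resized to the image resolution
    return np.concatenate(
        [photo, region_mask, normals, irradiance, material_exemplar], axis=-1
    )

# Tiny smoke test with random 64 x 64 inputs.
H = W = 64
cond = build_conditioning(
    photo=np.random.rand(H, W, 3),
    region_mask=(np.random.rand(H, W, 1) > 0.5).astype(np.float32),
    normals=np.random.rand(H, W, 3) * 2.0 - 1.0,
    irradiance=np.random.rand(H, W, 1),
    material_exemplar=np.random.rand(H, W, 3),
)
print(cond.shape)  # (64, 64, 11)
```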

Citations: 0
VideoMat: Extracting PBR Materials from Video Diffusion Models
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70180
J. Munkberg, Z. Wang, R. Liang, T. Shen, J. Hasselgren

We leverage finetuned video diffusion models, intrinsic decomposition of videos, and physically-based differentiable rendering to generate high quality materials for 3D models given a text prompt or a single image. We condition a video diffusion model to respect the input geometry and lighting condition. This model produces multiple views of a given 3D model with coherent material properties. Secondly, we use a recent model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we use the intrinsics alongside the generated video in a differentiable path tracer to robustly extract PBR materials directly compatible with common content creation tools.
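The last stage, recovering PBR parameters from the extracted intrinsics by gradient-based optimization, can be pictured with a toy stand-in. The sketch below, assuming PyTorch, fits a single (base color, roughness, metallic) tuple to noisy per-frame intrinsic estimates; the dummy data, the simple per-channel loss, and the absence of an actual differentiable path tracer are all simplifications rather than the paper's pipeline.

```python
import torch

# Dummy "intrinsics" extracted from N generated frames: per-frame estimates of
# base color (3), roughness (1) and metallic (1) for one texel. In the real
# pipeline these come from an intrinsic-decomposition model; here they are random.
torch.manual_seed(0)
n_frames = 8
est_base = torch.rand(n_frames, 3) * 0.1 + 0.5
est_rough = torch.rand(n_frames, 1) * 0.1 + 0.3
est_metal = torch.rand(n_frames, 1) * 0.1 + 0.1

# Material parameters to recover, optimized in an unconstrained space and
# squashed with a sigmoid so they stay in [0, 1].
params = torch.zeros(5, requires_grad=True)  # [r, g, b, roughness, metallic]
opt = torch.optim.Adam([params], lr=0.05)

for step in range(200):
    p = torch.sigmoid(params)
    base, rough, metal = p[:3], p[3:4], p[4:5]
    # Consistency loss against every frame's estimate; a differentiable path
    # tracer comparing re-renders to the video would replace this term.
    loss = ((base - est_base) ** 2).mean() \
         + ((rough - est_rough) ** 2).mean() \
         + ((metal - est_metal) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.sigmoid(params).detach())  # recovered [r, g, b, roughness, metallic]
```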

Citations: 0
Detail-Preserving Real-Time Hair Strand Linking and Filtering
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70176
T. Huang, J. Yuan, R. Hu, L. Wang, Y. Guo, B. Chen, J. Guo, J. Zhu

Realistic hair rendering remains a significant challenge in computer graphics due to the intricate microstructure of hair fibers and their anisotropic scattering properties, which make them highly sensitive to noise. Although recent advancements in image-space and 3D-space denoising and antialiasing techniques have facilitated real-time rendering in simple scenes, existing methods still struggle with excessive blurring and artifacts, particularly in fine hair details such as flyaway strands. These issues arise because current techniques often fail to preserve sub-pixel continuity and lack directional sensitivity in the filtering process. To address these limitations, we introduce a novel real-time hair filtering technique that effectively reconstructs fine fiber details while suppressing noise. Our method improves visual quality by maintaining strand-level details and ensuring computational efficiency, making it well-suited for real-time applications in video games and virtual reality (VR) and augmented reality (AR) environments.
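A filter with the directional sensitivity the abstract calls for can be sketched as follows: neighbors only contribute when their 2D strand tangents agree and their depths are close, so the filter does not blur across strands. This is a minimal illustration, assuming per-pixel tangent and depth buffers are available; the function `oriented_filter` and its Gaussian weights are assumptions, not the paper's filter.

```python
import numpy as np

def oriented_filter(color, tangent, depth, radius=2, sigma_dir=0.3, sigma_z=0.05):
    """Direction- and depth-aware smoothing (illustrative, not the paper's filter).
    color:   H x W x 3  shaded hair color
    tangent: H x W x 2  unit 2D strand direction per pixel
    depth:   H x W      linear depth per pixel
    """
    H, W, _ = color.shape
    out = np.zeros_like(color)
    for y in range(H):
        for x in range(W):
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W):
                        continue
                    # Tangent agreement (|dot| so opposite directions still match).
                    d = abs(float(np.dot(tangent[y, x], tangent[ny, nx])))
                    w_dir = np.exp(-(1.0 - d) ** 2 / (2 * sigma_dir ** 2))
                    # Depth proximity keeps strands from bleeding across layers.
                    w_z = np.exp(-(depth[y, x] - depth[ny, nx]) ** 2 / (2 * sigma_z ** 2))
                    w = w_dir * w_z
                    acc += w * color[ny, nx]
                    wsum += w
            out[y, x] = acc / max(wsum, 1e-8)
    return out

# Smoke test on a tiny random buffer.
H = W = 16
t = np.random.randn(H, W, 2)
t /= np.linalg.norm(t, axis=-1, keepdims=True)
print(oriented_filter(np.random.rand(H, W, 3), t, np.random.rand(H, W)).shape)
```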

Citations: 0
SPaGS: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70171
J. Li, F. Hahlbohm, T. Scholz, M. Eisemann, J.P. Tauscher, M. Magnor

In this paper we propose SPaGS, a high-quality, real-time free-viewpoint rendering approach from 360-degree panoramic images. While existing methods building on Neural Radiance Fields or 3D Gaussian Splatting have difficulties to achieve real-time frame rates and high-quality results at the same time, SPaGS combines the advantages of an explicit 3D Gaussian-based scene representation and ray casting-based rendering to attain fast and accurate results. Central to our new approach is the exact calculation of axis-aligned bounding boxes for spherical images that significantly accelerates omnidirectional ray casting of 3D Gaussians. We also present a new dataset consisting of ten real-world scenes recorded with a drone that incorporates both calibrated 360-degree panoramic images as well as perspective images captured simultaneously, i.e., with the same flight trajectory. Our evaluation on this new dataset as well as established benchmarks demonstrates that SPaGS excels over state-of-the-art methods in terms of both rendering quality and speed.
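The accelerator highlighted above is a per-Gaussian axis-aligned bounding box in the equirectangular image. A conservative approximation, not the exact derivation from the paper, can be written as below; the coordinate convention (camera at the origin, z up), the 3-sigma extent, and the neglect of the longitude seam and the poles are all simplifying assumptions.

```python
import numpy as np

def equirect_aabb(mean, cov, width, height, k_sigma=3.0):
    """Conservative pixel-space AABB of a 3D Gaussian for an equirectangular
    panorama centered at the origin (illustrative approximation, not SPaGS's
    exact computation; the +/- pi seam and the poles are not handled)."""
    dist = np.linalg.norm(mean)
    # Longitude / latitude of the Gaussian center (z-up convention assumed).
    lon = np.arctan2(mean[1], mean[0])                    # [-pi, pi]
    lat = np.arcsin(np.clip(mean[2] / dist, -1.0, 1.0))   # [-pi/2, pi/2]
    # Angular radius: k-sigma extent of the largest principal axis over distance.
    sigma_max = np.sqrt(np.max(np.linalg.eigvalsh(cov)))
    ang = np.arctan2(k_sigma * sigma_max, dist)
    # Map angles to pixel coordinates (u: longitude, v: latitude).
    u = (lon + np.pi) / (2 * np.pi) * width
    v = (np.pi / 2 - lat) / np.pi * height
    du = ang / (2 * np.pi) * width / max(np.cos(lat), 1e-3)  # longitude stretches near poles
    dv = ang / np.pi * height
    return (max(0, int(u - du)), max(0, int(v - dv)),
            min(width - 1, int(u + du)), min(height - 1, int(v + dv)))

# Example: an isotropic Gaussian two meters in front of the camera.
print(equirect_aabb(np.array([2.0, 0.0, 0.0]), np.eye(3) * 0.01,
                    width=2048, height=1024))
```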

Citations: 0
DiffNEG: A Differentiable Rasterization Framework for Online Aiming Optimization in Solar Power Tower Systems
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70166
Cangping Zheng, Xiaoxia Lin, Dongshuai Li, Yuhong Zhao, Jieqing Feng

Inverse rendering aims to infer scene parameters from observed images. In Solar Power Tower (SPT) systems, this corresponds to an aiming optimization problem—adjusting heliostats' orientations to shape the radiative flux density distribution (RFDD) on the receiver to conform to a desired distribution. The SPT system is widely favored in the field of renewable energy, where aiming optimization is crucial for ensuring its thermal efficiency and safety. However, traditional aiming optimization methods are inefficient and fail to meet online demands. In this paper, a novel optimization approach, DiffNEG, is proposed. DiffNEG introduces a differentiable rasterization method to model the reflected radiative flux of each heliostat as an elliptical Gaussian distribution. It leverages data-driven techniques to enhance simulation accuracy and employs automatic differentiation combined with gradient descent to achieve online, gradient-guided optimization in a continuous solution space. Experiments on a real large-scale heliostat field with nearly 30,000 heliostats demonstrate that DiffNEG can optimize within 10 seconds, improving efficiency by one order of magnitude compared to the latest DiffMCRT method and by three orders of magnitude compared to traditional heuristic methods, while also exhibiting superior robustness under both steady and transient states.
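The core loop, differentiable Gaussian flux splats plus gradient descent on the aim points, can be reduced to a toy example. The sketch below assumes PyTorch, an isotropic Gaussian spot per heliostat, a unit-square receiver, and a uniform target distribution; the spread `sigma`, the learning rate, and the loss are made-up values, not DiffNEG's calibrated, data-driven elliptical model.

```python
import torch

torch.manual_seed(0)
n_heliostats, res = 64, 32
sigma = 0.08  # spatial spread of one flux spot on the receiver (made up)

# Receiver grid in [0, 1]^2 and a uniform target flux distribution.
ys, xs = torch.meshgrid(torch.linspace(0, 1, res), torch.linspace(0, 1, res),
                        indexing="ij")
target = torch.full((res, res), 1.0 / (res * res))

# Aim points are the free variables (initialized near the receiver center).
aim = torch.nn.Parameter(0.5 + 0.05 * torch.randn(n_heliostats, 2))
opt = torch.optim.Adam([aim], lr=0.02)

def rendered_flux(aim_points):
    # Sum of per-heliostat Gaussian spots, normalized to a distribution.
    dx = xs[None] - aim_points[:, 0, None, None]
    dy = ys[None] - aim_points[:, 1, None, None]
    spots = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2))
    flux = spots.sum(dim=0)
    return flux / flux.sum()

for step in range(300):
    loss = ((rendered_flux(aim) - target) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    aim.data.clamp_(0.0, 1.0)  # keep aim points on the receiver

print(f"final loss: {loss.item():.3e}")
```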

Citations: 0
Perceived quality of BRDF models
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70162
Behnaz Kavoosighafi, Rafał K. Mantiuk, Saghi Hajisharif, Ehsan Miandji, Jonas Unger

Material appearance is commonly modeled with the Bidirectional Reflectance Distribution Functions (BRDFs), which need to trade accuracy for complexity and storage cost. To investigate the current practices of BRDF modeling, we collect the first high dynamic range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the current loss functions used to fit BRDF models, such as mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples give a significantly higher correlation with subjective quality judgments, and a simple Euclidean distance in the ITP color space (ΔE_ITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.
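For readers who want to see what a BRDF-space loss of the kind criticized here looks like, below is a minimal log-mapped MSE over sampled reflectance values, assuming paired reference and fitted BRDF samples with their outgoing cosines; the function name, the epsilon, and the cosine weighting are illustrative choices. The study's point is that an image-space color difference such as ΔE_ITP computed on rendered material samples tracks perception far better than this kind of error.

```python
import numpy as np

def log_mse_brdf_loss(brdf_ref, brdf_fit, cos_theta, eps=1e-3):
    """MSE of log reflectance over sampled directions, the style of BRDF-space
    loss the study found correlates poorly with perceived quality.
    brdf_ref, brdf_fit: (N, 3) RGB BRDF values at the same sampled directions
    cos_theta:          (N,)   cosine of the outgoing angle (weighting)
    """
    a = np.log(eps + brdf_ref * cos_theta[:, None])
    b = np.log(eps + brdf_fit * cos_theta[:, None])
    return float(np.mean((a - b) ** 2))

# Smoke test: a fitted BRDF that is a slightly scaled copy of the reference.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 2.0, size=(1024, 3))
cos_t = rng.uniform(0.0, 1.0, size=1024)
print(log_mse_brdf_loss(ref, ref * 1.1, cos_t))
```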

Citations: 0
Real-time Level-of-detail Strand-based Rendering
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70181
T. Huang, Y. Zhou, D. Lin, J. Zhu, L. Yan, K. Wu

We present a real-time strand-based rendering framework that ensures seamless transitions between different level-of-detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model to accurately capture both single and multiple scattering within the cluster for hairs and fibers. Building upon this, we further introduce a LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. Through tests on diverse hairstyles with various hair colors and animation, as well as knit patches, our framework closely replicates the appearance of multiple-scattered full geometries at various viewing distances, achieving up to a 13× speedup.
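The LoD decision is driven by each cluster's projected screen width. A minimal version of that test under a pinhole camera is sketched below; the 1.5-pixel threshold, the two-level "strands"/"aggregate" split, and the function names are assumptions rather than the paper's exact policy.

```python
import numpy as np

def projected_width_px(world_width, depth, focal_px):
    """Screen-space width in pixels of a strand cluster of the given
    world-space width at the given camera depth, for a pinhole camera
    whose focal length is expressed in pixels."""
    return world_width * focal_px / max(depth, 1e-6)

def choose_lod(clusters, focal_px, threshold_px=1.5):
    """Return 'strands' or 'aggregate' per cluster: clusters thinner than the
    threshold on screen are replaced by a single thick strand with an
    aggregated scattering model, the rest keep individual strands."""
    out = []
    for world_width, depth in clusters:
        w = projected_width_px(world_width, depth, focal_px)
        out.append("strands" if w >= threshold_px else "aggregate")
    return out

# Example: the same 2 mm cluster at three distances from the camera.
clusters = [(0.002, 0.3), (0.002, 2.0), (0.002, 10.0)]   # (meters, meters)
print(choose_lod(clusters, focal_px=1500))                # ['strands', 'strands', 'aggregate']
```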

Citations: 0
Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70182
J. Daniel Subias, Saul Daniel-Soriano, Diego Gutierrez, Ana Serrano

Large diffusion models have made a remarkable leap synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control to guide key material appearance properties, such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset with 1,336,272 stylized images of many different geometries in all five styles, including automatically-computed text descriptions of their appearance (e.g., “A glossy bunny hand painted with an orange soft crayon”); and (3) we train ControlNet to condition Stable Diffusion XL, synthesizing novel painterly depictions of new objects using simple inputs such as edge maps, hand-drawn sketches, or clip art. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.
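The automatically computed captions, item (2) above, are easy to picture as a small templating step over the five styles. The sketch below generates strings in the spirit of the quoted example; the gloss adjectives, object and color lists, and the single template are invented for illustration and are not the paper's caption generator.

```python
import itertools

# The five styles named in the paper; gloss adjectives, objects and colors are made up here.
STYLES = ["oil painting", "watercolor", "ink pen", "charcoal", "soft crayon"]
GLOSS = ["matte", "semi-glossy", "glossy", "mirror-like"]
OBJECTS = ["bunny", "teapot", "dragon"]
COLORS = ["orange", "blue", "green"]

def caption(gloss, obj, color, style):
    # e.g. "A glossy bunny hand painted with an orange soft crayon"
    article = "an" if color[0] in "aeiou" else "a"
    return f"A {gloss} {obj} hand painted with {article} {color} {style}"

captions = [caption(g, o, c, s)
            for g, o, c, s in itertools.product(GLOSS, OBJECTS, COLORS, STYLES)]
print(len(captions))   # 180 combinations
print(captions[0])
```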

Citations: 0
StructuReiser: A Structure-preserving Video Stylization Method
IF 2.9 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70161
R. Spetlik, D. Futschik, D. Sýkora

We introduce StructuReiser, a novel video-to-video translation method that transforms input videos into stylized sequences using a set of user-provided keyframes. Unlike most existing methods, StructuReiser strictly adheres to the structural elements of the target video, preserving the original identity while seamlessly applying the desired stylistic transformations. This provides a level of control and consistency that is challenging to achieve with text-driven or keyframe-based approaches, including large video models. Furthermore, StructuReiser supports real-time inference on standard graphics hardware as well as custom keyframe editing, enabling interactive applications and expanding possibilities for creative expression and video manipulation.

Citations: 0