
Computer Graphics Forum: Latest Publications

Reconstructing Curves from Sparse Samples on Riemannian Manifolds
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-31 · DOI: 10.1111/cgf.15136
D. Marin, F. Maggioli, S. Melzi, S. Ohrhallinger, M. Wimmer

Reconstructing 2D curves from sample points has long been a critical challenge in computer graphics, finding essential applications in vector graphics. The design and editing of curves on surfaces have only recently begun to receive attention; existing approaches rely primarily on human assistance and, where they do not, are limited by very strict sampling conditions. In this work, we formally improve on the state-of-the-art requirements and introduce an innovative algorithm capable of reconstructing closed curves directly on surfaces from a given sparse set of sample points. We extend and adapt a state-of-the-art planar curve reconstruction method to the realm of surfaces while dealing with the challenges arising from working on non-Euclidean domains. We demonstrate the robustness of our method by reconstructing multiple curves on various surface meshes. We explore novel potential applications of our approach, allowing for automated reconstruction of curves on Riemannian manifolds.
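
The abstract does not spell out the construction, but a core ingredient of any such method is querying geodesic rather than Euclidean distances between samples. The sketch below is an illustration of that ingredient, not the paper's algorithm: it approximates pairwise geodesic distances between sample vertices by running Dijkstra on the mesh edge graph; `edge_graph` and `pairwise_geodesics` are hypothetical helper names.

```python
# Illustration only: approximate geodesic distances between sparse sample
# vertices on a triangle mesh by running Dijkstra on the edge graph.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph(V, F):
    """Sparse |V| x |V| graph weighted by edge lengths (hypothetical helper)."""
    E = np.sort(np.vstack([F[:, [0, 1]], F[:, [1, 2]], F[:, [2, 0]]]), axis=1)
    E = np.unique(E, axis=0)                       # each undirected edge once
    w = np.linalg.norm(V[E[:, 0]] - V[E[:, 1]], axis=1)
    n = len(V)
    return coo_matrix((np.r_[w, w],
                       (np.r_[E[:, 0], E[:, 1]], np.r_[E[:, 1], E[:, 0]])),
                      shape=(n, n)).tocsr()

def pairwise_geodesics(V, F, samples):
    """Graph-geodesic distances between the given sample vertices."""
    D = dijkstra(edge_graph(V, F), indices=samples)
    return D[:, samples]                           # samples x samples matrix

# usage on a toy tetrahedron, with samples at vertices 0 and 3:
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(pairwise_geodesics(V, F, np.array([0, 3])))
```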

Citations: 0
Optimized Dual-Volumes for Tetrahedral Meshes
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-31 · DOI: 10.1111/cgf.15133
Alec Jacobson

Constructing well-behaved Laplacian and mass matrices is essential for tetrahedral mesh processing. Unfortunately, the de facto standard linear finite elements exhibit bias on tetrahedralized regular grids, motivating the development of finite-volume methods. In this paper, we place existing methods into a common construction, showing how their differences amount to the choice of simplex centers. These choices lead to satisfaction or breakdown of important properties: continuity with respect to vertex positions, positive semi-definiteness of the implied Dirichlet energy, positivity of the mass matrix, and unbiasedness on regular grids. Based on this analysis, we propose a new method for constructing dual-volumes which explicitly satisfy all of these properties via convex optimization.
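
For context, the simplest of the constructions the paper unifies is the barycentric dual volume, in which every tetrahedron donates a quarter of its volume to each of its four vertices, giving a diagonal (lumped) mass matrix. A minimal sketch of that baseline follows; it is not the paper's optimized construction.

```python
# Baseline only: barycentric dual volumes for a tet mesh (vol/4 per vertex),
# yielding a diagonal lumped mass matrix.
import numpy as np
from scipy.sparse import diags

def lumped_mass(V, T):
    """Diagonal mass matrix from barycentric dual volumes."""
    a, b, c, d = (V[T[:, i]] for i in range(4))
    vol = np.abs(np.einsum('ij,ij->i', np.cross(b - a, c - a), d - a)) / 6.0
    m = np.zeros(len(V))
    np.add.at(m, T.ravel(), np.repeat(vol / 4.0, 4))   # scatter vol/4
    return diags(m)

# usage: one unit tetrahedron has volume 1/6, so each vertex gets 1/24
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
T = np.array([[0, 1, 2, 3]])
print(lumped_mass(V, T).diagonal())
```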

Citations: 0
Coverage Axis++: Efficient Inner Point Selection for 3D Shape Skeletonization
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-31 · DOI: 10.1111/cgf.15143
Zimeng Wang, Zhiyang Dou, Rui Xu, Cheng Lin, Yuan Liu, Xiaoxiao Long, Shiqing Xin, Taku Komura, Xiaoming Yuan, Wenping Wang

We introduce Coverage Axis++, a novel and efficient approach to 3D shape skeletonization. The current state-of-the-art approaches for this task often rely on the watertightness of the input [LWS*15; PWG*19] or suffer from substantial computational costs [DLX*22; CD23], thereby limiting their practicality. To address this challenge, Coverage Axis++ proposes a heuristic algorithm to select skeletal points, offering a high-accuracy approximation of the Medial Axis Transform (MAT) while significantly mitigating computational intensity for various shape representations. We introduce a simple yet effective strategy that considers shape coverage, uniformity, and centrality to derive skeletal points. The selection procedure enforces consistency with the shape structure while favoring the dominant medial balls, which thus introduces a compact underlying shape representation in terms of MAT. As a result, Coverage Axis++ allows for skeletonization for various shape representations (e.g., watertight meshes, triangle soups, point clouds), specification of the number of skeletal points, few hyperparameters, and highly efficient computation with improved reconstruction accuracy. Extensive experiments across a wide range of 3D shapes validate the efficiency and effectiveness of Coverage Axis++. Our codes are available at https://github.com/Frank-ZY-Dou/Coverage_Axis.
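
As an illustration of the coverage term only (the paper's heuristic additionally scores uniformity and centrality), a greedy set-cover pass over candidate inner balls might look like the sketch below; the candidate generation and the `dilation` slack are assumptions.

```python
# Illustration of the coverage term only: greedy set cover over candidate
# inner balls (center + radius), repeatedly picking whichever ball covers
# the most still-uncovered surface samples.
import numpy as np

def select_skeletal_points(centers, radii, surface, k, dilation=0.02):
    D = np.linalg.norm(surface[None, :, :] - centers[:, None, :], axis=2)
    inside = D <= (radii[:, None] + dilation)      # candidate x sample mask
    covered = np.zeros(len(surface), dtype=bool)
    picked = []
    for _ in range(k):
        gain = (inside & ~covered[None, :]).sum(axis=1)
        best = int(np.argmax(gain))
        if gain[best] == 0:
            break                                  # nothing left to cover
        picked.append(best)
        covered |= inside[best]
    return picked

# usage with random toy data:
rng = np.random.default_rng(0)
centers, surface = rng.random((50, 3)), rng.random((200, 3))
print(select_skeletal_points(centers, rng.random(50) * 0.3, surface, k=8))
```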

Citations: 0
1-Lipschitz Neural Distance Fields
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-31 · DOI: 10.1111/cgf.15128
Guillaume Coiffier, Louis Béthune

Neural implicit surfaces are a promising tool for geometry processing; they represent a solid object as the zero level set of a neural network. Usually trained to approximate a signed distance function of the considered object, these methods exhibit great visual fidelity and quality near the surface, yet their properties tend to degrade with distance, making geometrical queries hard to perform without the help of complex range analysis techniques. Based on recent advancements in Lipschitz neural networks, we introduce a new method for approximating the signed distance function of a given object. As our neural function is made 1-Lipschitz by construction, it cannot overestimate the distance, which guarantees robustness even far from the surface. Moreover, the 1-Lipschitz constraint allows us to use a different loss function, called the hinge-Kantorovitch-Rubinstein loss, which pushes the gradient as close to unit norm as possible, thus reducing computation costs in iterative queries. As this loss function only needs a rough estimate of occupancy to be optimized, the true distance function need not be known. We are therefore able to compute neural implicit representations of even bad-quality geometry such as noisy point clouds or triangle soups. We demonstrate that our method is able to approximate the distance function of any closed or open surfaces or curves in the plane or in space, while still allowing sphere tracing or closest-point projections to be performed robustly.
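
A concrete consequence of the 1-Lipschitz, non-overestimating property is that plain sphere tracing becomes safe: stepping by f(p) along the ray can never jump across the surface. A minimal sketch, with an analytic sphere SDF standing in for the trained network:

```python
# Sphere tracing made safe by a non-overestimating distance function.
import numpy as np

def sdf(p):                                # stand-in for the 1-Lipschitz net
    return np.linalg.norm(p) - 1.0         # unit sphere at the origin

def sphere_trace(origin, direction, max_steps=128, eps=1e-5, t_max=100.0):
    d = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        dist = sdf(origin + t * d)
        if dist < eps:
            return t                       # hit
        t += dist                          # never overshoots the surface
        if t > t_max:
            break
    return None                            # miss

# a ray from z = -3 toward the origin hits the sphere at t ~ 2.0:
print(sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```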

Citations: 0
Evaluation in Neural Style Transfer: A Review
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-30 · DOI: 10.1111/cgf.15165
Eleftherios Ioannou, Steve Maddock

The field of neural style transfer (NST) has witnessed remarkable progress in the past few years, with approaches being able to synthesize artistic and photorealistic images and videos of exceptional quality. To evaluate such results, a diverse landscape of evaluation methods and metrics is used, including authors' opinions based on side-by-side comparisons, human evaluation studies that quantify the subjective judgements of participants, and a multitude of quantitative computational metrics which objectively assess the different aspects of an algorithm's performance. However, there is no consensus regarding the most suitable and effective evaluation procedure that can guarantee the reliability of the results. In this review, we provide an in-depth analysis of existing evaluation techniques, identify the inconsistencies and limitations of current evaluation methods, and give recommendations for standardized evaluation practices. We believe that the development of a robust evaluation framework will not only enable more meaningful and fairer comparisons among NST methods but will also enhance the comprehension and interpretation of research findings in the field.
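
As one example of the quantitative computational metrics the review surveys, Gram-matrix style similarity over VGG features is among the most widely used. A minimal sketch, assuming a recent torchvision; the layer selection follows the usual conv1_1 through conv5_1 convention and is our assumption, not a recommendation of the review.

```python
# One common quantitative metric: Gram-matrix style distance over VGG-19
# features (downloads ImageNet weights on first use).
import torch
import torchvision.models as models

vgg = models.vgg19(weights="DEFAULT").features.eval()
STYLE_LAYERS = {0, 5, 10, 19, 28}          # first conv of each block

def gram(feat):                            # (B, C, H, W) -> (B, C, C)
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

@torch.no_grad()
def style_distance(x, y):
    """Mean squared Gram difference between two (1, 3, H, W) images."""
    dist = 0.0
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in STYLE_LAYERS:
            dist += torch.mean((gram(x) - gram(y)) ** 2)
    return dist

print(style_distance(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)))
```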

Citations: 0
Front Matter
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-25 · DOI: 10.1111/cgf.15161

Imperial College London, South Kensington, London, UK

Program Co-Chairs

Elena Garces, Universidad Rey Juan Carlos, Spain / Adobe, France

Eric Haines, NVIDIA, US

Conference Chairs

Abhijeet Ghosh, Imperial College London, UK

Tobias Ritschel, University College London, UK

Laurent Belcour, Intel

Pierre Bénard, Bordeaux University, Inria Bordeaux-Sud-Ouest

Jiří Bittner, Czech Technical University in Prague

Tamy Boubekeur, Adobe Research

Per Christensen, Pixar

Petrik Clarberg, NVIDIA

Eugene d'Eon, NVIDIA

Daljit Singh Dhillon, Clemson University

George Drettakis, INRIA

Marc Droske, Wētā FX

Jonathan Dupuy, Intel

Farshad Einabadi, University of Surrey

Alban Fichet, Intel

Iliyan Georgiev, Adobe Research

Yotam Gingold, George Mason University

Pascal Grittman, Saarland University

Thorsten Grosch, TU Clausthal

Adrien Gruson, École de Technologie Supérieure

Tobias Günther, FAU Erlangen-Nuremberg

Milos Hasan, Adobe Research

Julian Iseringhausen, Google Research

Adrián Jarabo, Meta

Markus Kettunen, NVIDIA

Georgios Kopanas, Inria & Université Côte d'Azur

Rafael Kuffner dos Anjos, University of Leeds

Manuel Lagunas, Amazon

Thomas Leimkühler, MPI Informatik

Hendrik Lensch, University of Tübingen

Gabor Liktor, Intel

Jorge Lopez-Moreno, Universidad Rey Juan Carlos

Daniel Meister, Advanced Micro Devices, Inc.

Xiaoxu Meng, Tencent

Quirin Meyer, Coburg University

Zahra Montazeri, University of Manchester

Bochang Moon, Gwangju Institute of Science and Technology

Krishna Mullia, Adobe Research

Jacob Munkberg, NVIDIA

Thu Nguyen-Phuoc, Meta

Merlin Nimier-David, NVIDIA

Christoph Peters, Intel

Matt Pharr, NVIDIA

Julien Philip, Adobe Research

Alexander Reshetov, NVIDIA

Tobias Rittig, Additive Appearance, Charles University

Fabrice Rousselle, NVIDIA

Marco Salvi, NVIDIA

Nicolas Savva, Autodesk, Inc.

Johannes Schudeiske (Hanika), KIT

Kai Selgrad, OTH Regensburg

Ari Silvennoinen, Activision

Gurprit Singh, MPI Informatik

Erik Sintorn, Chalmers University of Technology

Peter-Pike Sloan, Activision

Cara Tursun, Rijksuniversiteit Groningen

Karthik Vaidyanathan, NVIDIA

Konstantinos Vardis, Huawei Technologies

Delio Vicini, Google

Jiří Vorba, Weta Digital

Bruce Walter, Cornell University

Li-Yi Wei, Adobe Research

Hongzhi Wu, Zhejiang University

Zexiang Xu, Adobe Research

Kai Yan, University of California Irvine

Tizian Zeltner, NVIDIA

Shuang Zhao, University of California, Irvine

Artur Grigorev, ETH Zurich

Citations: 0
Neural Appearance Model for Cloth Rendering
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15156
G. Y. Soh, Z. Montazeri

The realistic rendering of woven and knitted fabrics has posed significant challenges for many years. Previously, fiber-based micro-appearance models have achieved considerable success in attaining high levels of realism. However, rendering such models remains complex due to the intricate internal scattering among the hundreds of fibers within a yarn, requiring vast amounts of memory and time to render. In this paper, we introduce a new framework to capture aggregated appearance by tracing many light paths through the underlying fiber geometry. We then employ lightweight neural networks to accurately model the aggregated BSDF, which allows for the precise modeling of a diverse array of materials while offering substantial improvements in speed and reductions in memory. Furthermore, we introduce a novel importance sampling scheme to further speed up the rate of convergence. We validate the efficacy and versatility of our framework through comparisons with preceding fiber-based shading models as well as the most recent yarn-based model.
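
The general shape of such a lightweight neural BSDF is a small MLP mapping incoming and outgoing directions, plus a latent code for the yarn parameters, to an aggregated RGB response. The sketch below is a generic illustration; the layer sizes and the 8-dimensional latent are assumptions, not the paper's architecture.

```python
# Generic sketch of a lightweight neural BSDF: directions + latent -> RGB.
import torch
import torch.nn as nn

class NeuralBSDF(nn.Module):
    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),   # non-negative RGB response
        )

    def forward(self, wi, wo, latent):
        """wi, wo: (B, 3) unit directions; latent: (B, latent_dim)."""
        return self.net(torch.cat([wi, wo, latent], dim=-1))

# usage with random directions and latent codes:
model = NeuralBSDF()
wi = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
wo = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
print(model(wi, wo, torch.randn(4, 8)).shape)      # -> torch.Size([4, 3])
```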

Citations: 0
Learning to Rasterize Differentiably
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15145
C. Wu, H. Mailee, Z. Montazeri, T. Ritschel

Differentiable rasterization changes the standard formulation of primitive rasterization — by enabling gradient flow from a pixel to its underlying triangles — using distribution functions in different stages of rendering, creating a “soft” version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We study meta-learning tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose and occlusion) so it generalizes to new and unseen differentiable rendering tasks with optimal softness.
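
The standard softening the paper generalizes replaces a triangle's hard inside/outside test with a sigmoid of the pixel's signed distance to the triangle boundary, controlled by a temperature sigma. A minimal NumPy sketch of that per-pixel soft coverage follows; the paper parameterizes a continuous family of such functions, whereas this shows only the single fixed-sigmoid baseline.

```python
# Soft coverage of one CCW 2-D triangle: sigmoid(-signed_dist / sigma).
import numpy as np

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def seg_dist(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip((ap @ ab) / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(ap - t * ab)

def signed_dist(p, tri):                    # tri: (3, 2), CCW winding
    d = min(seg_dist(p, tri[i], tri[(i + 1) % 3]) for i in range(3))
    inside = all(cross2(tri[(i + 1) % 3] - tri[i], p - tri[i]) >= 0
                 for i in range(3))
    return -d if inside else d

def soft_coverage(p, tri, sigma=0.05):      # sigma -> 0 recovers hard raster
    return 1.0 / (1.0 + np.exp(signed_dist(p, tri) / sigma))

tri = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.8]])
for q in ([0.5, 0.4], [0.5, 0.19], [0.0, 0.0]):   # inside / near edge / far
    print(soft_coverage(np.array(q), tri))
```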

Citations: 0
MatUp: Repurposing Image Upsamplers for SVBRDFs
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15151
A. Gauthier, B. Kerbl, J. Levallois, R. Faury, J. M. Thiery, T. Boubekeur

We propose MatUp, an upsampling filter for material super-resolution. Our method takes as input a low-resolution SVBRDF and upscales its maps so that their rendering under various lighting conditions fits upsampled renderings inferred in the radiance domain with pre-trained RGB upsamplers. We formulate our local filter as a compact Multilayer Perceptron (MLP), which acts on a small window of the input SVBRDF and is optimized using a data-fitting loss defined over upsampled radiance at various locations. This optimization is entirely performed at the scale of a single, independent material. Doing so, MatUp leverages the reconstruction capabilities acquired over large collections of natural images by pre-trained RGB models and provides regularization over self-similar structures. In particular, our light-weight neural filter avoids retraining complex architectures from scratch or accessing any large collection of low/high resolution material pairs – which do not actually exist at the scale RGB upsamplers are trained with. As a result, MatUp provides fine and coherent details in the upscaled material maps, as shown in the extensive evaluation we provide.
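
The optimization pattern can be illustrated with a toy version: render the low-resolution material, upsample the renderings with a frozen RGB upsampler, then fit high-resolution maps so that their renderings match. The sketch below substitutes bilinear interpolation for the pretrained upsampler, Lambertian shading for the full SVBRDF render, and a direct per-texel parameterization for the paper's MLP filter; all three substitutions are ours.

```python
# Toy version of the fitting loop: targets are upsampled *renderings*;
# high-res maps are fit so their own renders match under many lights.
import torch
import torch.nn.functional as F

def render(albedo, light):
    """Diffuse toy render of a (3, H, W) albedo under a directional light."""
    n = torch.tensor([0.0, 0.0, 1.0])              # flat normals for the toy
    return albedo * torch.clamp(torch.dot(n, light), 0.0)

up = lambda img: F.interpolate(img[None], scale_factor=4,    # frozen upsampler
                               mode="bilinear", align_corners=False)[0]

lights = [F.normalize(torch.randn(3), dim=0) for _ in range(8)]
lowres = torch.rand(3, 16, 16)                     # input low-res albedo map
targets = [up(render(lowres, L)) for L in lights]  # radiance-domain targets

hires = up(lowres).clone().requires_grad_(True)    # init from naive upscale
opt = torch.optim.Adam([hires], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = sum(F.mse_loss(render(hires, L), t) for L, t in zip(lights, targets))
    loss.backward()
    opt.step()
print(float(loss))
```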

Citations: 0
Lossless Basis Expansion for Gradient-Domain Rendering
IF 2.7 · CAS Tier 4 (Computer Science) · Q2 (Computer Science, Software Engineering) · Pub Date: 2024-07-24 · DOI: 10.1111/cgf.15153
Q. Fang, T. Hachisuka

Gradient-domain rendering utilizes difference estimates with shift mapping to reduce variance in Monte Carlo rendering. Such difference estimates are effective under the assumption that pixels for difference estimates have similar integrands. This assumption is often violated because it is common to have spatially varying BSDFs with material maps, which potentially result in a very different integrand per pixel. We introduce an extension of gradient-domain rendering that effectively supports such per-pixel variation in BSDFs based on basis expansion. Basis expansion for BSDFs has been used extensively in other problems in rendering, where the goal is to approximate a given BSDF by a weighted sum of predefined basis functions. We instead utilize lossless basis expansion, representing a BSDF without any approximation by adding the remaining difference in the original basis expansion. This lossless basis expansion allows us to cancel more terms via shift mapping, resulting in low variance difference estimates even with per-pixel BSDF variation. We also extend the Poisson reconstruction process to support this basis expansion. Regular gradient-domain rendering can be expressed as a special case of our extension, where the basis is simply the BSDF per pixel (i.e., no basis expansion). We provide proof-of-concept experiments and showcase the effectiveness of our method for scenes with highly varying material maps. Our results show noticeable improvement over regular gradient-domain rendering under both L1 and L2 reconstructions. The resulting formulation via basis expansion essentially serves as a new way of path reuse among pixels in the presence of per-pixel variation.
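
The Poisson reconstruction step shared by gradient-domain methods solves a screened least-squares problem of the form min_I ||I - p||^2 + alpha ||DI - g||^2, where p is the noisy primal estimate and g the lower-variance gradient estimate. A minimal 1-D sketch with SciPy sparse least squares follows (2-D images are analogous); this is the generic L2 reconstruction, not the paper's basis-expansion variant.

```python
# Screened Poisson reconstruction in 1-D: fuse a noisy primal estimate
# with lower-noise finite-difference (gradient) estimates.
import numpy as np
from scipy.sparse import diags, identity, vstack
from scipy.sparse.linalg import lsqr

n, alpha = 64, 10.0
x = np.linspace(0.0, 1.0, n)
truth = np.sin(4 * np.pi * x)
rng = np.random.default_rng(1)
primal = truth + rng.normal(0.0, 0.3, n)              # high-noise primal
grad = np.diff(truth) + rng.normal(0.0, 0.03, n - 1)  # low-noise differences

D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
A = vstack([identity(n), np.sqrt(alpha) * D])         # [I; sqrt(alpha) D]
b = np.concatenate([primal, np.sqrt(alpha) * grad])
recon = lsqr(A, b)[0]                                 # least-squares solve
print("primal MSE:", np.mean((primal - truth) ** 2))
print("recon  MSE:", np.mean((recon - truth) ** 2))
```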

Citations: 0