
Latest Publications in ACM Transactions on Graphics

Representing Long Volumetric Video with Temporal Gaussian Hierarchy
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687919
Zhen Xu, Yinghao Xu, Zhiyuan Yu, Sida Peng, Jiaming Sun, Hujun Bao, Xiaowei Zhou
This paper addresses the challenge of reconstructing long volumetric videos from multi-view RGB videos. Recent dynamic view synthesis methods leverage powerful 4D representations, such as feature grids or point cloud sequences, to achieve high-quality rendering results. However, they are typically limited to short (1-2 s) video clips and often suffer from large memory footprints when dealing with longer videos. To solve this issue, we propose a novel 4D representation, named Temporal Gaussian Hierarchy, to compactly model long volumetric videos. Our key observation is that dynamic scenes generally exhibit varying degrees of temporal redundancy, as they consist of areas changing at different speeds. Motivated by this, our approach builds a multi-level hierarchy of 4D Gaussian primitives, where each level separately describes scene regions with different degrees of content change, and adaptively shares Gaussian primitives to represent unchanged scene content over different temporal segments, thus effectively reducing the number of Gaussian primitives. In addition, the tree-like structure of the Gaussian hierarchy allows us to efficiently represent the scene at a particular moment with a subset of Gaussian primitives, leading to nearly constant GPU memory usage during training and rendering regardless of the video length. Moreover, we design a Compact Appearance Model that mixes diffuse and view-dependent Gaussians to further minimize the model size while maintaining rendering quality. We also develop a hardware-accelerated rasterization pipeline for Gaussian primitives to improve rendering speed. Extensive experimental results demonstrate the superiority of our method over alternative methods in terms of training cost, rendering speed, and storage usage. To our knowledge, this work is the first approach capable of efficiently handling hours of volumetric video data while maintaining state-of-the-art rendering quality.
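The constant per-frame working set follows from one property: at any query time, only one temporal segment per hierarchy level is active. A toy Python sketch of that lookup (the power-of-two segment layout and all names are our own illustration, not the paper's data structure):

```python
from typing import Dict, List, Tuple

def active_segments(t: float, duration: float, num_levels: int) -> List[Tuple[int, int]]:
    """Return the (level, segment_index) pairs covering query time t.

    Level l splits the video into 2**l equal temporal segments: slowly
    changing content can live near the root, fast content near the leaves.
    """
    assert 0.0 <= t < duration
    keys = []
    for level in range(num_levels):
        num_segments = 2 ** level
        keys.append((level, int(t / duration * num_segments)))
    return keys

def gather_gaussians(t: float, duration: float, num_levels: int,
                     hierarchy: Dict[Tuple[int, int], List[int]]) -> List[int]:
    """Union of Gaussian primitive ids needed to render the scene at time t:
    one segment per level, so its size is independent of the video length."""
    ids: List[int] = []
    for key in active_segments(t, duration, num_levels):
        ids.extend(hierarchy.get(key, []))
    return ids
```

For an 8-second clip with three levels, a query at t = 5 s touches segments (0, 0), (1, 1), and (2, 2) only; adding more video adds more segments, not more per-frame work.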
Citations: 0
DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687930
Qianyue He, Dongyu Du, Haitian Jiang, Xin Jin
Time-of-flight (ToF) devices have greatly propelled the advancement of various multi-modal perception applications. However, achieving accurate rendering of time-resolved information remains a challenge, particularly in scenes involving complex geometries, diverse materials and participating media. Existing ToF rendering works have demonstrated notable results, yet they struggle with scenes involving scattering media and camera-warped settings. Other steady-state volumetric rendering methods exhibit significant bias or variance when directly applied to ToF rendering tasks. To address these challenges, we integrate transient diffusion theory into path construction and propose novel sampling methods for free-path distance and scattering direction, via resampled importance sampling and offline tabulation. An elliptical sampling method is further adapted to provide controllable vertex connection satisfying any required photon traversal time. In contrast to the existing temporal uniform sampling strategy, our method is the first to consider the contribution of transient radiance to importance-sample the full path, and thus enables improved temporal path construction under multiple scattering settings. The proposed method can be integrated into both path tracing and photon-based frameworks, delivering significant improvements in quality and efficiency with at least a 5x MSE reduction versus SOTA methods in equal rendering time.
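Of the building blocks named above, resampled importance sampling (RIS) is easy to sketch in isolation: draw candidates from a cheap proposal, then keep one with probability proportional to its target-over-proposal weight. The generic interface below is our own illustration, not the paper's transient-aware estimator:

```python
import random

def resampled_importance_sample(target, proposal_sample, proposal_pdf, m=16, rng=random):
    """Resampled importance sampling: draw m candidates from an easy
    proposal, then resample one with probability proportional to
    target(x) / proposal_pdf(x).

    Returns the chosen sample and its RIS weight
    W = sum_i w_i / (m * target(chosen)), which multiplies the estimator.
    """
    candidates = [proposal_sample(rng) for _ in range(m)]
    weights = [target(x) / proposal_pdf(x) for x in candidates]
    total = sum(weights)
    if total == 0.0:
        return None, 0.0
    chosen = candidates[-1]            # fallback guards against float round-off
    u = rng.random() * total           # roulette-wheel selection
    acc = 0.0
    for x, w in zip(candidates, weights):
        acc += w
        if u <= acc:
            chosen = x
            break
    return chosen, total / (m * target(chosen))
```

When the target already equals the proposal, every weight is 1 and the returned W collapses to exactly 1, a quick sanity check on the estimator.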
Citations: 0
Medial Skeletal Diagram: A Generalized Medial Axis Approach for Compact 3D Shape Representation
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687964
Minghao Guo, Bohan Wang, Wojciech Matusik
We propose the Medial Skeletal Diagram, a novel skeletal representation that tackles the prevailing issues around skeleton sparsity and reconstruction accuracy in existing skeletal representations. Our approach augments the continuous elements in the medial axis representation to effectively shift the complexity away from the discrete elements. To that end, we introduce generalized enveloping primitives, an enhancement over the standard primitives in the medial axis, which ensure efficient coverage of intricate local features of the input shape and substantially reduce the number of discrete elements required. Moreover, we present a computational framework for constructing a medial skeletal diagram from an arbitrary closed manifold mesh. Our optimization pipeline ensures that the resulting medial skeletal diagram comprehensively covers the input shape with the fewest primitives. Additionally, each optimized primitive undergoes a post-refinement process to guarantee an accurate match with the source mesh in both geometry and tessellation. We validate our approach on a comprehensive benchmark of 100 shapes, demonstrating the sparsity of the discrete elements and superior reconstruction accuracy across a variety of cases. Finally, we exemplify the versatility of our representation in downstream applications such as shape generation, mesh decomposition, shape optimization, mesh alignment, mesh compression, and user-interactive design.
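As a loose intuition for "cover the shape with as few discrete elements as possible", here is a deliberately crude greedy sphere cover over a point sampling. The paper's generalized enveloping primitives and optimization pipeline are far more capable; this only illustrates the coverage objective:

```python
import math

def greedy_sphere_cover(points, radius):
    """Greedily cover a point-sampled shape with equal-radius spheres.

    A crude stand-in for primitive fitting: repeatedly center a sphere on
    the first uncovered sample and discard every sample within `radius`.
    """
    remaining = list(points)
    centers = []
    while remaining:
        c = remaining[0]
        centers.append(c)
        remaining = [p for p in remaining if math.dist(p, c) > radius]
    return centers
```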
Citations: 0
Learning Based Toolpath Planner on Diverse Graphs for 3D Printing
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687933
Yuming Huang, Yuhu Guo, Renbo Su, Xingjian Han, Junhao Ding, Tianyu Zhang, Tao Liu, Weiming Wang, Guoxin Fang, Xu Song, Emily Whiting, Charlie Wang
This paper presents a learning-based planner for computing optimized 3D printing toolpaths on prescribed graphs, where the challenges include the varying graph structures across models and the large numbers of nodes and edges in each graph. We adopt an on-the-fly strategy to tackle these challenges, formulating the planner as a Deep Q-Network (DQN) based optimizer that decides the next 'best' node to visit. We construct the state spaces from the Local Search Graph (LSG) centered at each node of the graph, encoded by a carefully designed algorithm so that LSGs in similar configurations can be identified and the earlier learned DQN priors re-used, accelerating the computation of toolpath planning. Our method can cover different 3D printing applications by defining their corresponding reward functions. Toolpath planning problems in wire-frame printing, continuous fiber printing, and metallic printing are selected to demonstrate its generality. The performance of our planner has been verified by testing the resultant toolpaths in physical experiments. Using our planner, wire-frame models with up to 4.2k struts can be successfully printed, up to 93.3% of sharp turns on continuous fiber toolpaths can be avoided, and thermal distortion in metallic printing can be reduced by 24.9%.
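The planner's core decision, picking the next 'best' node, can be illustrated with a minimal epsilon-greedy rule, where a plain dict of Q-values stands in for the trained DQN evaluated on the LSG state (all names here are hypothetical):

```python
import random

def choose_next_node(q_values, candidate_nodes, epsilon=0.1, rng=random):
    """Epsilon-greedy choice of the next node to visit: with probability
    epsilon explore a random candidate, otherwise exploit the candidate
    with the highest estimated return."""
    if rng.random() < epsilon:
        return rng.choice(candidate_nodes)
    return max(candidate_nodes, key=lambda n: q_values.get(n, float("-inf")))
```

With epsilon = 0 the rule is purely greedy, which is how a trained policy is typically deployed at planning time.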
Citations: 0
Volumetric Homogenization for Knitwear Simulation
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687911
Chun Yuan, Haoyang Shi, Lei Lan, Yuxing Qiu, Cem Yuksel, Huamin Wang, Chenfanfu Jiang, Kui Wu, Yin Yang
This paper presents volumetric homogenization, a spatially varying homogenization scheme for knitwear simulation. We are motivated by the observation that macro-scale fabric dynamics is strongly correlated with the underlying knitting pattern, so homogenization towards a single material is less effective when the knitting is complex and non-repetitive. Our method tackles this challenge by homogenizing the yarn-level material locally at volumetric elements. Assigning a virtual volume to a knitting structure enables us to model bending and twisting effects via a simple volume-preserving penalty, which effectively alleviates the material nonlinearity. We employ an adjoint Gauss-Newton formulation [Zehnder et al. 2021] to battle the dimensionality challenge of such per-element material optimization. This intuitive material model makes the forward simulation GPU-friendly. To this end, our pipeline is also equipped with a novel domain-decomposed subspace solver crafted for GPU projective dynamics, which makes our simulator hundreds of times faster than the yarn-level simulator. Experiments validate the capability and effectiveness of volumetric homogenization. Our method produces realistic animations of knitwear matching the quality of full-scale yarn-level simulations. It is also orders of magnitude faster than existing homogenization techniques in both the training and simulation stages.
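The "simple volume-preserving penalty" plausibly takes the standard form E = k (det F - 1)^2 on each element's deformation gradient F; the exact energy is our assumption, not taken from the paper. A minimal NumPy version:

```python
import numpy as np

def volume_penalty(F: np.ndarray, stiffness: float = 1.0) -> float:
    """Penalize local volume change of a volumetric element.

    J = det(F) is the local volume ratio under the deformation; the
    penalty stiffness * (J - 1)**2 vanishes exactly for volume-preserving
    deformations (assumed form, see lead-in).
    """
    J = np.linalg.det(F)
    return float(stiffness * (J - 1.0) ** 2)
```

An identity deformation gradient incurs zero penalty; uniformly doubling one axis (J = 2) incurs a penalty of stiffness * 1.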
Citations: 0
All you need is rotation: Construction of developable strips
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687947
Takashi Maekawa, Felix Scholz
We present a novel approach to generating developable strips along a space curve. The key idea of the new method is to use the rotation angle between the Frenet frame of the input space curve and the Darboux frame of that curve on the resulting developable strip as a free design parameter, thereby revolving the strip around the tangential axis of the input space curve. This angle is not restricted to be constant; it can be any differentiable function defined on the curve, thereby creating a large design space of developable strips that share a common directrix curve. The range of possibilities for choosing the rotation angle is diverse, encompassing constant angles, linearly varying angles, sinusoidal patterns, and even solutions derived from initial value problems involving ordinary differential equations. This gives the proposed method the potential to serve a wide range of practical applications, spanning fields such as architectural design, industrial design, and papercraft modeling. In our computational and physical examples, we demonstrate the flexibility of the method by constructing, among others, toroidal and helical windmill blades for papercraft models, curved foldings, triply orthogonal structures, and developable strips featuring a log-aesthetic directrix curve.
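A compact numerical sketch of the construction: discretize the curve, build a Frenet frame by finite differences, and rotate the ruling by theta about the tangent, r = cos(theta) N + sin(theta) B. Array layout and names are ours; the paper works with the continuous frames:

```python
import numpy as np

def strip_edges(curve: np.ndarray, theta: np.ndarray, width: float):
    """Sweep a ruling of length `width` along a discretized space curve.

    The ruling direction is the Frenet normal rotated by theta[i] about
    the tangent, r = cos(theta) * N + sin(theta) * B -- the free design
    parameter of the method (assumes nonzero curvature everywhere).
    """
    curve = np.asarray(curve, dtype=float)
    T = np.gradient(curve, axis=0)                     # finite-difference tangents
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    dT = np.gradient(T, axis=0)                        # discrete curvature vector
    N = dT - (dT * T).sum(axis=1, keepdims=True) * T   # project out the tangential part
    N /= np.linalg.norm(N, axis=1, keepdims=True)
    B = np.cross(T, N)
    r = np.cos(theta)[:, None] * N + np.sin(theta)[:, None] * B
    return curve - 0.5 * width * r, curve + 0.5 * width * r
```

Varying theta along the curve revolves the strip around the curve's tangent, which is exactly the design freedom described above.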
Citations: 0
3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687934
Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Riccardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, Zan Gojcic
Particle-based representations of radiance fields such as 3D Gaussian Splatting have found great success for reconstructing and re-rendering complex scenes. Most existing methods render particles via rasterization, projecting them to screen-space tiles for processing in sorted order. This work instead considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance GPU ray tracing hardware. To efficiently handle large numbers of semi-transparent particles, we describe a specialized rendering algorithm which encapsulates particles with bounding meshes to leverage fast ray-triangle intersections, and shades batches of intersections in depth order. The benefits of ray tracing are well known in computer graphics: processing incoherent rays for secondary lighting effects such as shadows and reflections, rendering from the highly distorted cameras common in robotics, stochastically sampling rays, and more. With our renderer, this flexibility comes at little cost compared to rasterization. Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision. We further propose related improvements to the basic Gaussian representation, including a simple use of generalized kernel functions which significantly reduces particle hit counts.
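The depth-ordered shading step reduces, per ray, to standard front-to-back alpha compositing. A scalar-color Python sketch (the BVH traversal, bounding meshes, and batching are omitted; names are ours):

```python
def composite_hits(hits):
    """Front-to-back compositing of semi-transparent particle hits along
    one ray. `hits` holds (depth, alpha, color) tuples in any order."""
    color = 0.0
    transmittance = 1.0
    for depth, alpha, c in sorted(hits):       # nearest hit first
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:               # early termination: ray is saturated
            break
    return color, transmittance
```

The early-termination check is what makes depth ordering pay off: once the accumulated opacity saturates, the remaining hits can be skipped entirely.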
Citations: 0
Quark: Real-time, High-resolution, and General Neural View Synthesis
IF 6.2 · Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687953
John Flynn, Michael Broxton, Lukas Murmann, Lucy Chai, Matthew DuVall, Clément Godard, Kathryn Heal, Srinivas Kaza, Stephen Lombardi, Xuan Luo, Supreeth Achar, Kira Prabhu, Tiancheng Sun, Lynn Tsai, Ryan Overbeck
We present a novel neural algorithm for performing high-quality, high-resolution, real-time novel view synthesis. From a sparse set of input RGB images or video streams, our network both reconstructs the 3D scene and renders novel views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes and produces state-of-the-art quality for a real-time method. Our quality approaches, and in some cases surpasses, the quality of some of the top offline methods. In order to achieve these results we use a novel combination of several key concepts, and tie them together into a cohesive and effective algorithm. We build on previous works that represent the scene using semi-transparent layers and use an iterative learned render-and-refine approach to improve those layers. Instead of flat layers, our method reconstructs layered depth maps (LDMs) that efficiently represent scenes with complex depth and occlusions. The iterative update steps are embedded in a multi-scale, UNet-style architecture to perform as much compute as possible at reduced resolution. Within each update step, to better aggregate the information from multiple input views, we use a specialized Transformer-based network component. This allows the majority of the per-input image processing to be performed in the input image space, as opposed to layer space, further increasing efficiency. Finally, due to the real-time nature of our reconstruction and rendering, we dynamically create and discard the internal 3D geometry for each frame, generating the LDM for each view. Taken together, this produces a novel and effective algorithm for view synthesis. Through extensive evaluation, we demonstrate that we achieve state-of-the-art quality at real-time rates.
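Rendering from a layered depth map of semi-transparent layers can be sketched as follows. This is a minimal illustration of the representation the abstract describes, not Quark's actual pipeline: the function name and array layout are assumptions. The key difference from flat layers at globally fixed depths is that an LDM stores one depth per layer per pixel, so the compositing order is resolved independently at every pixel.

```python
import numpy as np

def composite_ldm(colors, alphas, depths):
    """Per-pixel front-to-back compositing of a layered depth map (LDM).

    colors: (L, H, W, 3); alphas, depths: (L, H, W).
    Layers are sorted by their per-pixel depth values, then composited
    with the standard "over" operator from nearest to farthest.
    """
    order = np.argsort(depths, axis=0)                       # nearest first
    a = np.take_along_axis(alphas, order, axis=0)
    c = np.take_along_axis(colors, order[..., None], axis=0)
    # Transmittance reaching each layer: product of (1 - alpha) of all
    # nearer layers, with the front layer seeing full transmittance 1.
    trans = np.cumprod(1.0 - a, axis=0)
    trans = np.concatenate([np.ones_like(trans[:1]), trans[:-1]], axis=0)
    return np.sum(trans[..., None] * a[..., None] * c, axis=0)  # (H, W, 3)
```

Because every pixel sorts its own layer depths, the same set of layers can represent complex depth and occlusion patterns that flat layers cannot.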
{"title":"Quark: Real-time, High-resolution, and General Neural View Synthesis","authors":"John Flynn, Michael Broxton, Lukas Murmann, Lucy Chai, Matthew DuVall, Clément Godard, Kathryn Heal, Srinivas Kaza, Stephen Lombardi, Xuan Luo, Supreeth Achar, Kira Prabhu, Tiancheng Sun, Lynn Tsai, Ryan Overbeck","doi":"10.1145/3687953","DOIUrl":"https://doi.org/10.1145/3687953","url":null,"abstract":"We present a novel neural algorithm for performing high-quality, highresolution, real-time novel view synthesis. From a sparse set of input RGB images or videos streams, our network both reconstructs the 3D scene and renders novel views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes and produces state-of-the-art quality for a real-time method. Our quality approaches, and in some cases surpasses, the quality of some of the top offline methods. In order to achieve these results we use a novel combination of several key concepts, and tie them together into a cohesive and effective algorithm. We build on previous works that represent the scene using semi-transparent layers and use an iterative learned render-and-refine approach to improve those layers. Instead of flat layers, our method reconstructs layered depth maps (LDMs) that efficiently represent scenes with complex depth and occlusions. The iterative update steps are embedded in a multi-scale, UNet-style architecture to perform as much compute as possible at reduced resolution. Within each update step, to better aggregate the information from multiple input views, we use a specialized Transformer-based network component. This allows the majority of the per-input image processing to be performed in the input image space, as opposed to layer space, further increasing efficiency. 
Finally, due to the real-time nature of our reconstruction and rendering, we dynamically create and discard the internal 3D geometry for each frame, generating the LDM for each view. Taken together, this produces a novel and effective algorithm for view synthesis. Through extensive evaluation, we demonstrate that we achieve state-of-the-art quality at real-time rates.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"14 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neural Kernel Regression for Consistent Monte Carlo Denoising
IF 6.2 Tier 1 Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-11-19 DOI: 10.1145/3687949
Pengju Qiao, Qi Wang, Yuchi Huo, Shiji Zhai, Zixuan Xie, Wei Hua, Hujun Bao, Tao Liu
Unbiased Monte Carlo path tracing that is extensively used in realistic rendering produces undesirable noise, especially with low samples per pixel (spp). Recently, several methods have coped with this problem by importing unbiased noisy images and auxiliary features to neural networks to either predict a fixed-sized kernel for convolution or directly predict the denoised result. Since it is impossible to produce arbitrarily high spp images as the training dataset, the network-based denoising fails to produce high-quality images under high spp. More specifically, network-based denoising is inconsistent and does not converge to the ground truth as the sampling rate increases. On the other hand, the post-correction estimators yield a blending coefficient for a pair of biased and unbiased images influenced by image errors or variances to ensure the consistency of the denoised image. As the sampling rate increases, the blending coefficient of the unbiased image converges to 1, that is, using the unbiased image as the denoised result. However, these estimators usually produce artifacts due to the difficulty of accurately predicting image errors or variances with low spp. To address the above problems, we take advantage of both kernel-predicting methods and post-correction denoisers. A novel kernel-based denoiser is proposed based on distribution-free kernel regression consistency theory, which does not explicitly combine the biased and unbiased results but constrains the kernel bandwidth to produce consistent results under high spp. Meanwhile, our kernel regression method explores bandwidth optimization in the robust auxiliary feature space instead of the noisy image space. This leads to consistent high-quality denoising at both low and high spp. Experimental results demonstrate that our method outperforms existing denoisers in accuracy and consistency.
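The core idea — kernel regression whose weights come from robust auxiliary features (albedo, normals, depth) rather than the noisy radiance itself — can be sketched as a Nadaraya-Watson estimator. This is an illustrative, unoptimized version: the fixed `bandwidth` parameter stands in for the learned bandwidth optimization the paper performs, and consistency comes from the fact that shrinking the bandwidth drives the estimate back toward the unbiased input.

```python
import numpy as np

def kernel_regression_denoise(noisy, features, radius=3, bandwidth=0.2):
    """Denoise via Nadaraya-Watson kernel regression in feature space.

    noisy: (H, W, 3) radiance; features: (H, W, F) auxiliary features.
    Each pixel is a weighted average of its (2*radius+1)^2 neighborhood,
    with Gaussian weights on feature-space distance. A smaller bandwidth
    trusts the input more, so as spp grows and the input converges, the
    denoised image can converge with it.
    """
    H, W, _ = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            fdiff = features[y0:y1, x0:x1] - features[y, x]
            w = np.exp(-np.sum(fdiff**2, axis=-1) / (2.0 * bandwidth**2))
            out[y, x] = (w[..., None] * noisy[y0:y1, x0:x1]).sum((0, 1)) / w.sum()
    return out
```

With identical features everywhere this reduces to a plain box average; with a vanishing bandwidth it returns the input unchanged, which is the consistency behavior the paper formalizes.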
{"title":"Neural Kernel Regression for Consistent Monte Carlo Denoising","authors":"Pengju Qiao, Qi Wang, Yuchi Huo, Shiji Zhai, Zixuan Xie, Wei Hua, Hujun Bao, Tao Liu","doi":"10.1145/3687949","DOIUrl":"https://doi.org/10.1145/3687949","url":null,"abstract":"Unbiased Monte Carlo path tracing that is extensively used in realistic rendering produces undesirable noise, especially with low samples per pixel (spp). Recently, several methods have coped with this problem by importing unbiased noisy images and auxiliary features to neural networks to either predict a fixed-sized kernel for convolution or directly predict the denoised result. Since it is impossible to produce arbitrarily high spp images as the training dataset, the network-based denoising fails to produce high-quality images under high spp. More specifically, network-based denoising is inconsistent and does not converge to the ground truth as the sampling rate increases. On the other hand, the post-correction estimators yield a blending coefficient for a pair of biased and unbiased images influenced by image errors or variances to ensure the consistency of the denoised image. As the sampling rate increases, the blending coefficient of the unbiased image converges to 1, that is, using the unbiased image as the denoised results. However, these estimators usually produce artifacts due to the difficulty of accurately predicting image errors or variances with low spp. To address the above problems, we take advantage of both kernel-predicting methods and post-correction denoisers. A novel kernel-based denoiser is proposed based on distribution-free kernel regression consistency theory, which does not explicitly combine the biased and unbiased results but constrain the kernel bandwidth to produce consistent results under high spp. Meanwhile, our kernel regression method explores bandwidth optimization in the robust auxiliary feature space instead of the noisy image space. 
This leads to consistent high-quality denoising at both low and high spp. Experiment results demonstrate that our method outperforms existing denoisers in accuracy and consistency.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"197 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Chebyshev Parameterization for Woven Fabric Modeling
IF 6.2 Tier 1 Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-11-19 DOI: 10.1145/3687928
Annika Öhri, Aviv Segall, Jing Ren, Olga Sorkine-Hornung
Distortion-minimizing surface parameterization is an essential step for computing 2D pieces necessary to fabricate a target 3D shape from flat material. Garment design and textile fabrication are a prominent application example. Common distortion measures quantify length, angle or area preservation in an isotropic manner, so that when applied to woven textile fabrication, they implicitly assume fabric behaves like paper, which is inextensible in all directions and does not permit shearing. However, woven fabric differs significantly from paper: it exhibits anisotropy along the yarn directions and allows for some degree of shearing. We propose a novel distortion energy based on Chebyshev nets that anisotropically penalizes shearing and stretching. Our energy formulation can be used as an optimization objective for surface parameterization and is simple to minimize via a local-global algorithm. We demonstrate its advantages in modeling nets or woven fabric behavior over the commonly used isotropic distortion energies.
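One illustrative form of such an anisotropic energy (an assumption for exposition, not the paper's exact formulation): hard-penalize stretch of the mapped warp/weft directions, which correspond to the nearly inextensible yarns, and soft-penalize shear between them.

```python
import numpy as np

def chebyshev_energy(J, shear_weight=0.1):
    """Chebyshev-net style distortion energy for one parameterization Jacobian.

    J: (k, 2) Jacobian whose columns map the warp and weft directions of
    the flat pattern onto the surface (k = 2 or 3). Deviation of either
    column from unit length (yarn stretch) is penalized with weight 1;
    the cosine between the columns (shear) is penalized with a small
    weight, since woven fabric permits some shearing. Isotropic energies
    would instead penalize both effects equally.
    """
    ju, jv = J[:, 0], J[:, 1]
    nu, nv = np.linalg.norm(ju), np.linalg.norm(jv)
    stretch = (nu - 1.0) ** 2 + (nv - 1.0) ** 2
    cos_shear = (ju @ jv) / (nu * nv)
    return stretch + shear_weight * cos_shear ** 2
```

An identity Jacobian costs nothing, a pure shear of the unit directions costs only the small shear term, and any yarn-direction stretch dominates the energy, matching the fabric behavior described above.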
{"title":"Chebyshev Parameterization for Woven Fabric Modeling","authors":"Annika Öhri, Aviv Segall, Jing Ren, Olga Sorkine-Hornung","doi":"10.1145/3687928","DOIUrl":"https://doi.org/10.1145/3687928","url":null,"abstract":"Distortion-minimizing surface parameterization is an essential step for computing 2D pieces necessary to fabricate a target 3D shape from flat material. Garment design and textile fabrication are a prominent application example. Common distortion measures quantify length, angle or area preservation in an isotropic manner, so that when applied to woven textile fabrication, they implicitly assume fabric behaves like paper, which is inextensible in all directions and does not permit shearing. However, woven fabric differs significantly from paper: it exhibits anisotropy along the yarn directions and allows for some degree of shearing. We propose a novel distortion energy based on Chebyshev nets that anisotropically penalizes shearing and stretching. Our energy formulation can be used as an optimization objective for surface parameterization and is simple to minimize via a local-global algorithm. We demonstrate its advantages in modeling nets or woven fabric behavior over the commonly used isotropic distortion energies.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"38 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0