
Latest Publications in ACM Transactions on Graphics

StyleTex: Style Image-Guided Texture Generation for 3D Models
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687931
Zhiyu Xie, Yuqing Zhang, Xiangjun Tang, Yiqian Wu, Dehan Chen, Gongsheng Li, Xiaogang Jin
Style-guided texture generation aims to generate a texture that is harmonious with both the style of the reference image and the geometry of the input mesh, given a reference style image and a 3D mesh with its text description. Although diffusion-based 3D texture generation methods, such as distillation sampling, have numerous promising applications in stylized games and films, they require addressing two challenges: 1) completely decoupling style and content from the reference image for 3D models, and 2) aligning the generated texture with the color tone and style of the reference image as well as with the given text prompt. To this end, we introduce StyleTex, an innovative diffusion-model-based framework for creating stylized textures for 3D models. Our key insight is to decouple style information from the reference image while disregarding content in diffusion-based distillation sampling. Specifically, given a reference image, we first decompose its style feature from the image CLIP embedding by subtracting the embedding's orthogonal projection in the direction of the content feature, which is represented by a text CLIP embedding. Our novel approach to disentangling the reference image's style and content information allows us to generate distinct style and content features. We then inject the style feature into the cross-attention mechanism to incorporate it into the generation process, while utilizing the content feature as a negative prompt to further dissociate content information. Finally, we incorporate these strategies into StyleTex to obtain stylized textures. We utilize Interval Score Matching to address over-smoothness and over-saturation, in combination with a geometry-aware ControlNet that ensures consistent geometry throughout the generative process. The resulting textures generated by StyleTex retain the style of the reference image, while also aligning with the text prompts and intrinsic details of the given 3D mesh. Quantitative and qualitative experiments show that our method outperforms existing baseline methods by a significant margin.
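The projection step at the heart of this decomposition is compact enough to show directly. Below is a minimal numpy sketch of subtracting the orthogonal projection of the image CLIP embedding along the content (text) direction; the random vectors are placeholders standing in for real CLIP features, so this illustrates only the arithmetic, not the authors' full pipeline.

```python
import numpy as np

def decompose_style(image_emb: np.ndarray, content_emb: np.ndarray) -> np.ndarray:
    """Remove the content direction from an image embedding.

    style = image_emb - proj_content(image_emb), where proj_content is the
    orthogonal projection onto the (normalized) content direction given by
    a text CLIP embedding.
    """
    c = content_emb / np.linalg.norm(content_emb)   # unit content direction
    projection = np.dot(image_emb, c) * c           # component along content
    return image_emb - projection                   # residual = style feature

# Placeholder 512-d embeddings standing in for CLIP outputs.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)     # image CLIP embedding of the reference image
content_emb = rng.normal(size=512)   # text CLIP embedding describing the content

style_emb = decompose_style(image_emb, content_emb)
# The style feature is orthogonal to the content direction (up to float error).
print(np.dot(style_emb, content_emb / np.linalg.norm(content_emb)))  # ~0.0
```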
Citations: 0
GaussianObject: High-Quality 3D Object Reconstruction from Four Views with Gaussian Splatting
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687759
Chen Yang, Sikuang Li, Jiemin Fang, Ruofan Liang, Lingxi Xie, Xiaopeng Zhang, Wei Shen, Qi Tian
Reconstructing and rendering 3D objects from highly sparse views is of critical importance for promoting applications of 3D vision techniques and improving user experience. However, images from sparse views contain only very limited 3D information, leading to two significant challenges: 1) difficulty in building multi-view consistency, as images for matching are too few; 2) partially omitted or highly compressed object information, as view coverage is insufficient. To tackle these challenges, we propose GaussianObject, a framework to represent and render the 3D object with Gaussian splatting that achieves high rendering quality with only four input images. We first introduce techniques of visual hull and floater elimination, which explicitly inject structure priors into the initial optimization process to help build multi-view consistency, yielding a coarse 3D Gaussian representation. Then we construct a Gaussian repair model based on diffusion models to supplement the omitted object information, where Gaussians are further refined. We design a self-generating strategy to obtain image pairs for training the repair model. We further design a COLMAP-free variant, where pre-given accurate camera poses are not required, which achieves competitive quality and facilitates wider applications. GaussianObject is evaluated on several challenging datasets, including MipNeRF360, OmniObject3D, OpenIllumination, and our own collection of unposed images, achieving superior performance from only four views and significantly outperforming previous SOTA methods.
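The visual-hull prior mentioned in the abstract can be pictured with a toy voxel-carving routine: keep only the voxels whose projections fall inside every foreground mask, and use the survivors as a coarse structural initialization. The sketch below is a hedged illustration with assumed 3x4 world-to-pixel projection matrices and binary masks as inputs; it is not the paper's implementation and omits floater elimination entirely.

```python
import numpy as np

def visual_hull(masks, projections, grid_res=32, extent=1.0):
    """Carve a voxel grid: keep voxels that project inside every mask.

    masks       -- list of (H, W) boolean foreground masks
    projections -- list of 3x4 camera projection matrices (world -> pixel)
    Returns an (N, 3) array of voxel centers inside the visual hull.
    """
    axis = np.linspace(-extent, extent, grid_res)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # homogeneous

    keep = np.ones(len(pts), dtype=bool)
    for mask, P in zip(masks, projections):
        uvw = pts_h @ P.T                      # project to pixel coordinates
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        H, W = mask.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        ui = np.clip(u.astype(int), 0, W - 1)
        vi = np.clip(v.astype(int), 0, H - 1)
        keep &= inside & mask[vi, ui]          # carve away off-mask voxels
    return pts[keep]
```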
Citations: 0
MVImgNet2.0: A Larger-scale Dataset of Multi-view Images
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687973
Yushuang Wu, Luyue Shi, Haolin Liu, Hongjie Liao, Lingteng Qiu, Weihao Yuan, Xiaodong Gu, Zilong Dong, Shuguang Cui, Xiaoguang Han
MVImgNet is a large-scale dataset that contains multi-view images of ~220k real-world objects in 238 classes. As a counterpart of ImageNet, it introduces 3D visual signals via multi-view shooting, making a soft bridge between 2D and 3D vision. This paper constructs the MVImgNet2.0 dataset, which expands MVImgNet to a total of ~520k objects and 515 categories, yielding a 3D dataset whose scale is more comparable to datasets in the 2D domain. In addition to the expanded dataset scale and category range, MVImgNet2.0 is of a higher quality than MVImgNet owing to four new features: (i) most shoots capture 360° views of the objects, which can support learning complete object reconstruction; (ii) the segmentation manner is advanced to produce foreground object masks of higher accuracy; (iii) a more powerful structure-from-motion method is adopted to derive the camera pose for each frame with a lower estimation error; (iv) higher-quality dense point clouds are reconstructed via advanced methods for objects captured in 360° views, which can serve downstream applications. Extensive experiments confirm the value of the proposed MVImgNet2.0 in boosting the performance of large 3D reconstruction models. MVImgNet2.0 will be public at luyues.github.io/mvimgnet2, including multi-view images of all 520k objects, the reconstructed high-quality point clouds, and data annotation codes, hoping to inspire the broader vision community.
Citations: 0
Polarimetric BSSRDF Acquisition of Dynamic Faces
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687767
Hyunho Ha, Inseung Hwang, Nestor Monzon, Jaemin Cho, Donggun Kim, Seung-Hwan Baek, Adolfo Muñoz, Diego Gutierrez, Min H. Kim
Acquisition and modeling of polarized light reflection and scattering help reveal the shape, structure, and physical characteristics of an object, which is increasingly important in computer graphics. However, current polarimetric acquisition systems are limited to static and opaque objects. Human faces, on the other hand, present a particularly difficult challenge, given their complex structure and reflectance properties, the strong presence of spatially-varying subsurface scattering, and their dynamic nature. We present a new polarimetric acquisition method for dynamic human faces, which focuses on capturing spatially varying appearance and precise geometry, across a wide spectrum of skin tones and facial expressions. It includes both single and heterogeneous subsurface scattering, index of refraction, and specular roughness and intensity, among other parameters, while revealing biophysically-based components such as inner- and outer-layer hemoglobin, eumelanin and pheomelanin. Our method leverages such components' unique multispectral absorption profiles to quantify their concentrations, which in turn inform our model about the complex interactions occurring within the skin layers. To our knowledge, our work is the first to simultaneously acquire polarimetric and spectral reflectance information alongside biophysically-based skin parameters and geometry of dynamic human faces. Moreover, our polarimetric skin model integrates seamlessly into various rendering pipelines.
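The quantification of chromophore concentrations from multispectral absorption profiles can be read as a spectral-unmixing problem. A minimal sketch follows, assuming the measured absorption is approximately a non-negative linear combination of known per-chromophore spectra; the spectra here are synthetic placeholders rather than the measured profiles the paper relies on.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic absorption spectra (columns) standing in for hemoglobin, eumelanin,
# and pheomelanin, sampled at 8 wavelengths. Real profiles come from measurement.
wavelengths = np.linspace(450, 800, 8)
A = np.stack([
    np.exp(-((wavelengths - 560) / 60.0) ** 2),   # stand-in for hemoglobin
    (wavelengths / 450.0) ** -3,                  # stand-in for eumelanin
    (wavelengths / 450.0) ** -2,                  # stand-in for pheomelanin
], axis=1)

# Fabricate a measurement from known concentrations plus a little noise.
true_c = np.array([0.7, 0.2, 0.1])
measured = A @ true_c + 0.005 * np.random.default_rng(1).normal(size=len(wavelengths))

# Non-negative least squares recovers per-patch concentrations.
concentrations, residual = nnls(A, measured)
print(concentrations)   # approximately recovers true_c, up to the injected noise
```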
Citations: 0
GPU Coroutines for Flexible Splitting and Scheduling of Rendering Tasks
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687766
Shaokun Zheng, Xin Chen, Zhong Shi, Ling-Qi Yan, Kun Xu
We introduce coroutines into GPU kernel programming, providing an automated solution for flexible splitting and scheduling of rendering tasks. This approach addresses a prevalent challenge in harnessing the power of modern GPUs for complex, imbalanced graphics workloads like path tracing. Usually, to accommodate the SIMT execution model and latency-hiding architecture, developers have to decompose a monolithic mega-kernel into smaller sub-tasks for improved thread coherence and reduced register pressure. However, involving the handling of intricate nested control flows and numerous interdependent program states, this process can be exceedingly tedious and error-prone when performed manually. Coroutines, a building block for asynchronous programming in many high-level CPU languages, exhibit untapped potential for restructuring GPU kernels due to their versatility in control representation. By extending Luisa [Zheng et al. 2022], we implement an asymmetric, stackless coroutine model with programming language support and multiple built-in schedulers for modern GPUs. To showcase the effectiveness of our model and implementation, we examine them in different application scenarios, including path tracing, SDF rendering, and incorporation with custom passes.
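The kernel-splitting idea is easiest to see in ordinary CPU code: write a long task as a coroutine that suspends at each stage boundary, and let a scheduler interleave many instances so they all execute the same stage together. The Python-generator sketch below is only a CPU-side analogy of that control-flow transformation under a naive round-robin scheduler; the actual system compiles asymmetric, stackless coroutines into GPU kernels through the Luisa framework.

```python
from collections import deque

def trace_path(pixel):
    """One rendering task written as a coroutine; each yield is a suspension
    point where a monolithic mega-kernel would otherwise keep divergent
    threads occupied."""
    ray = f"camera_ray({pixel})"
    yield "intersect"              # suspend before the intersection stage
    hit = f"hit({ray})"
    yield "shade"                  # suspend before the shading stage
    return f"shade({hit})"

def run_stage_by_stage(tasks):
    """Round-robin scheduler: advance every live coroutine one stage at a time,
    so all tasks execute the same stage together before moving on."""
    live = deque(tasks)
    results = []
    while live:
        task = live.popleft()
        try:
            next(task)             # run until the next suspension point
            live.append(task)      # still unfinished: requeue for the next stage
        except StopIteration as done:
            results.append(done.value)
    return results

print(run_stage_by_stage([trace_path(p) for p in range(4)]))
```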
Citations: 0
Real-time Large-scale Deformation of Gaussian Splatting
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687756
Lin Gao, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai
Neural implicit representations, including Neural Distance Fields and Neural Radiance Fields, have demonstrated significant capabilities for reconstructing surfaces with complicated geometry and topology, and generating novel views of a scene. Nevertheless, it is challenging for users to directly deform or manipulate these implicit representations with large deformations in a real-time fashion. Gaussian Splatting (GS) has recently become a promising method with explicit geometry for representing static scenes and facilitating high-quality and real-time synthesis of novel views. However, it cannot be easily deformed due to the use of discrete Gaussians and the lack of explicit topology. To address this, we develop a novel GS-based method (GaussianMesh) that enables interactive deformation. Our key idea is to design an innovative mesh-based GS representation, which is integrated into Gaussian learning and manipulation. 3D Gaussians are defined over an explicit mesh, and they are bound with each other: the rendering of 3D Gaussians guides the mesh face split for adaptive refinement, and the mesh face split directs the splitting of 3D Gaussians. Moreover, the explicit mesh constraints help regularize the Gaussian distribution, suppressing poor-quality Gaussians (e.g., misaligned Gaussians or long, narrow Gaussians), thus enhancing visual quality and reducing artifacts during deformation. Based on this representation, we further introduce a large-scale Gaussian deformation technique to enable deformable GS, which alters the parameters of 3D Gaussians according to the manipulation of the associated mesh. Our method benefits from existing mesh deformation datasets for more realistic data-driven Gaussian deformation. Extensive experiments show that our approach achieves high-quality reconstruction and effective deformation, while maintaining the promising rendering results at a high frame rate (65 FPS on average on a single commodity GPU).
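A minimal picture of binding Gaussians to an explicit mesh: store barycentric coordinates per Gaussian so its center can be re-evaluated from the current, possibly deformed, vertex positions. The numpy sketch below uses assumed toy data and only recomputes centers; the paper's representation additionally adapts covariances and splits faces adaptively.

```python
import numpy as np

def gaussian_means_from_mesh(vertices, face_vertices, gaussian_face, barycentric):
    """Re-evaluate bound Gaussian centers from the current mesh state.

    vertices      -- (V, 3) vertex positions (change when the mesh deforms)
    face_vertices -- (F, 3) vertex indices of each triangle
    gaussian_face -- (G,)  index of the triangle each Gaussian is bound to
    barycentric   -- (G, 3) barycentric coordinates of each Gaussian on its face
    """
    corners = vertices[face_vertices[gaussian_face]]      # (G, 3, 3)
    return np.einsum("gi,gij->gj", barycentric, corners)  # (G, 3) centers

# Tiny example: a two-triangle quad with three bound Gaussians.
vertices = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
face_vertices = np.array([[0, 1, 2], [1, 3, 2]])
gaussian_face = np.array([0, 0, 1])
barycentric = np.array([[1/3, 1/3, 1/3], [0.5, 0.25, 0.25], [1/3, 1/3, 1/3]])

print(gaussian_means_from_mesh(vertices, face_vertices, gaussian_face, barycentric))

# Deform the mesh (lift one vertex); the bound Gaussians follow automatically.
vertices[3, 2] = 0.5
print(gaussian_means_from_mesh(vertices, face_vertices, gaussian_face, barycentric))
```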
Citations: 0
Trading Spaces: Adaptive Subspace Time Integration for Contacting Elastodynamics
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687946
Ty Trusty, Yun (Raymond) Fei, David Levin, Danny Kaufman
We construct a subspace simulator that adaptively balances solution improvement against system size. The core components of our simulator are an adaptive subspace oracle, model, and parallel time-step solver algorithm. Our in-time-step adaptivity oracle continually assesses subspace solution quality and candidate update proposals while accounting for temporal variations in deformation and spatial variations in material. In turn, our adaptivity model is subspace-agnostic. It allows application across subspace representations and expresses unrestricted deformations independent of subspace choice. We couple our oracle and model with a custom-constructed parallel time-step solver for our enriched systems that exposes a pair of user tolerances which provide controllable simulation quality. As tolerances are tightened, our model converges to full-space solutions (with expected cost increases). On the other hand, as tolerances are relaxed, we obtain output-bound simulation costs. We demonstrate the efficacy of our approach across a wide range of challenging nonlinear material models, material stiffnesses, heterogeneities, dynamic behaviors, and frictionally contacting conditions, obtaining scalable and efficient simulations of complex elastodynamic scenarios.
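The size-versus-accuracy trade can be illustrated on a plain linear solve: project the system into a small subspace, check the full-space residual against a user tolerance, and enrich the subspace only when the cheap solution falls short. The sketch below uses orthogonalized residuals as enrichment directions on a synthetic SPD system; it is a schematic of the adaptive idea, not the paper's oracle or its parallel time-step solver.

```python
import numpy as np

def adaptive_subspace_solve(K, f, U, tol=1e-6, max_enrich=20):
    """Solve K u = f in a growing subspace spanned by the columns of U.

    Each round solves the reduced system (U^T K U) q = U^T f, measures the
    full-space residual, and, if it exceeds the tolerance, appends the
    orthogonalized residual as a new basis vector (a simple stand-in for a
    smarter adaptivity oracle).
    """
    for _ in range(max_enrich):
        q = np.linalg.solve(U.T @ K @ U, U.T @ f)    # cheap reduced solve
        u = U @ q                                    # lift back to full space
        r = f - K @ u                                # full-space residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            break                                    # cheap solution is good enough
        d = r - U @ (U.T @ r)                        # keep the basis orthonormal
        U = np.column_stack([U, d / np.linalg.norm(d)])
    return u, U.shape[1]

# Synthetic SPD system and a deliberately poor one-column starting subspace.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
K = A @ A.T + 50.0 * np.eye(50)
f = rng.normal(size=50)
u, dim = adaptive_subspace_solve(K, f, U=np.ones((50, 1)) / np.sqrt(50))
print(dim, np.linalg.norm(K @ u - f) / np.linalg.norm(f))
```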
Citations: 0
SGEdit: Bridging LLM with Text2Image Generative Model for Scene Graph-based Image Editing
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687957
Zhiyuan Zhang, DongDong Chen, Jing Liao
Scene graphs offer a structured, hierarchical representation of images, with nodes and edges symbolizing objects and the relationships among them. They can serve as a natural interface for image editing, dramatically improving precision and flexibility. Leveraging this benefit, we introduce a new framework that integrates a large language model (LLM) with a Text2Image generative model for scene graph-based image editing. This integration enables precise modifications at the object level and creative recomposition of scenes without compromising overall image integrity. Our approach involves two primary stages: 1) Utilizing an LLM-driven scene parser, we construct an image's scene graph, capturing key objects and their interrelationships, as well as parsing fine-grained attributes such as object masks and descriptions. These annotations facilitate concept learning with a fine-tuned diffusion model, representing each object with an optimized token and detailed description prompt. 2) During the image editing phase, an LLM editing controller guides the edits towards specific areas. These edits are then implemented by an attention-modulated diffusion editor, utilizing the fine-tuned model to perform object additions, deletions, replacements, and adjustments. Through extensive experiments, we demonstrate that our framework significantly outperforms existing image editing methods in terms of editing precision and scene aesthetics. Our code is available at https://bestzzhang.github.io/SGEdit.
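As a concrete picture of the editing interface, a scene graph can be held as plain node and edge records that the LLM controller modifies before the diffusion editor re-renders the affected regions. The sketch below is a hypothetical, minimal data structure with a single "replace object" edit; fields such as mask and token are placeholders for the per-object annotations described above.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                      # object category, e.g. "dog"
    description: str = ""          # fine-grained text description
    mask: object = None            # per-object segmentation mask (placeholder)
    token: str = ""                # learned concept token for the diffusion model

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (subject_id, relation, object_id)

    def replace_object(self, node_id: str, new_node: Node):
        """Swap one object while keeping its relationships intact."""
        assert node_id in self.nodes, f"unknown node {node_id}"
        self.nodes[node_id] = new_node           # edges still reference node_id

# Build "a dog sitting on a bench", then ask for the dog to become a cat.
graph = SceneGraph()
graph.nodes["obj1"] = Node("dog", "a small brown dog")
graph.nodes["obj2"] = Node("bench", "a wooden park bench")
graph.edges.append(("obj1", "sitting on", "obj2"))

graph.replace_object("obj1", Node("cat", "a grey tabby cat"))
print(graph.edges)   # the relation is preserved: ('obj1', 'sitting on', 'obj2')
```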
Citations: 0
Designing triangle meshes with controlled roughness
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687940
Victor Ceballos Inza, Panagiotis Fykouras, Florian Rist, Daniel Häseker, Majid Hojjat, Christian Müller, Helmut Pottmann
Motivated by the emergence of rough surfaces in various areas of design, we address the computational design of triangle meshes with controlled roughness. Our focus lies on small levels of roughness. There, roughness or smoothness mainly arises through the local positioning of the mesh edges and faces with respect to the curvature behavior of the reference surface. The analysis of this interaction between curvature and roughness is simplified by a 2D dual diagram and its generation within so-called isotropic geometry, which may be seen as a structure-preserving simplification of Euclidean geometry. Isotropic dihedral angles of the mesh are close to the Euclidean angles and appear as Euclidean edge lengths in the dual diagram, which also serves as a tool for visualization and interactive local design. We present a computational framework that includes appearance-aware remeshing, optimization-based automatic roughening, and control of dihedral angles.
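Since roughness is steered through dihedral angles, the underlying quantity is easy to state in code: for each interior edge, the angle between the planes of the two incident faces. A small numpy sketch follows, assuming a manifold triangle mesh with consistently oriented faces; it reports unsigned angles and does not distinguish convex from concave edges.

```python
import numpy as np
from collections import defaultdict

def dihedral_angles(vertices, faces):
    """Unsigned dihedral angle (radians) at every interior edge of a triangle mesh.

    vertices -- (V, 3) float array, faces -- (F, 3) int array.
    A perfectly flat edge gives pi; smaller values mean a sharper crease.
    """
    # Per-face unit normals.
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Map each undirected edge to the faces that use it.
    edge_faces = defaultdict(list)
    for f_idx, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].append(f_idx)

    angles = {}
    for edge, fs in edge_faces.items():
        if len(fs) == 2:                       # interior (manifold) edge
            cos_t = np.clip(np.dot(n[fs[0]], n[fs[1]]), -1.0, 1.0)
            angles[edge] = np.pi - np.arccos(cos_t)
    return angles

# A regular tetrahedron: every edge has the same dihedral angle (~70.5 degrees).
V = np.array([[1., 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
F = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
print({e: np.degrees(a) for e, a in dihedral_angles(V, F).items()})
```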
Citations: 0
GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687923
Songyin Wu, Deepak Vembar, Anton Sochenov, Selvakumar Panneer, Sungye Kim, Anton Kaplanyan, Ling-Qi Yan
Real-time rendering has been embracing ever-demanding effects, such as ray tracing. However, rendering such effects at high resolution and high frame rate remains challenging. Frame extrapolation methods boost the frame rate by generating future frames from previous frames and, unlike frame interpolation methods such as DLSS 3 and FSR 3, do not introduce additional latency. Extrapolation is, however, a more challenging task because of the lack of information in disocclusion regions and the complexity of future motion, and recent methods also have a high engine-integration cost because they require G-buffers as input. We propose a G-buffer free frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and different types of disocclusions, and design the corresponding modules of the extrapolation block to handle them. After that, a light-weight shading correction network is used to correct shading and improve overall quality. GFFE achieves comparable or better results than previous interpolation and G-buffer dependent extrapolation methods, with more efficient performance and easier integration.
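At its simplest, extrapolation forward-warps the latest frame along motion estimated from previous frames, leaving disocclusion holes for a network such as GFFE's to fill. The numpy sketch below performs only that first step (a constant-velocity, nearest-pixel forward warp on synthetic data); none of GFFE's heuristic modules or shading correction are represented.

```python
import numpy as np

def extrapolate_frame(frame, motion, alpha=1.0):
    """Forward-warp `frame` by `alpha` times the per-pixel motion (in pixels).

    frame  -- (H, W) grayscale image (the latest rendered frame)
    motion -- (H, W, 2) motion from the previous frame to `frame`, as (dy, dx)
    Returns the extrapolated frame and a hole mask where nothing landed.
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.round(ys + alpha * motion[..., 0]).astype(int)
    tx = np.round(xs + alpha * motion[..., 1]).astype(int)
    valid = (ty >= 0) & (ty < H) & (tx >= 0) & (tx < W)

    out = np.zeros_like(frame)
    hit = np.zeros((H, W), dtype=bool)
    out[ty[valid], tx[valid]] = frame[ys[valid], xs[valid]]   # nearest-pixel splat
    hit[ty[valid], tx[valid]] = True
    return out, ~hit          # ~hit marks disocclusion holes to be inpainted

# Toy example: a bright square moving 2 pixels to the right per frame.
frame = np.zeros((8, 8)); frame[2:4, 2:4] = 1.0
motion = np.zeros((8, 8, 2)); motion[..., 1] = 2.0
future, holes = extrapolate_frame(frame, motion)
print(future.astype(int))
```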
Citations: 0