
Latest publications in Computer Graphics Forum

Controllable Anime Image Editing via Probability of Attribute Tags
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1111/cgf.15245
Zhenghao Song, Haoran Mo, Chengying Gao

Editing anime images via probabilities of attribute tags allows the degree of manipulation to be controlled in an intuitive and convenient manner. Existing methods fall short in progressive modification and in preserving unintended regions of the input image. We propose a controllable anime image editing framework based on adjusting the tag probabilities, in which a probability encoding network (PEN) encodes the probabilities into features that capture their continuous characteristics. The encoded features can thus direct the generative process of a pre-trained diffusion model and facilitate linear manipulation. We also introduce a local editing module that automatically identifies the intended regions and constrains the edits to those regions only, leaving the other regions unchanged. Comprehensive comparisons with existing methods indicate the effectiveness of our framework in both one-shot and linear editing modes. Results in additional applications further demonstrate the generalization ability of our approach.
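As a rough illustration of the two ideas the abstract describes, encoding tag probabilities into a continuous conditioning feature and restricting edits to an automatically detected region, the following minimal PyTorch sketch may help. The module names, layer sizes and mask blending are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of probability encoding and region-restricted blending.
import torch
import torch.nn as nn

class ProbabilityEncoder(nn.Module):
    """Maps a vector of attribute-tag probabilities in [0, 1] to a conditioning
    feature, so nearby probabilities give nearby features (the 'continuous
    characteristics' mentioned in the abstract)."""
    def __init__(self, num_tags: int, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_tags, feat_dim), nn.SiLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, tag_probs: torch.Tensor) -> torch.Tensor:
        return self.net(tag_probs)            # (B, feat_dim) conditioning feature


def local_edit(original: torch.Tensor, edited: torch.Tensor,
               mask: torch.Tensor) -> torch.Tensor:
    """Keep edits only inside the predicted region mask (B, 1, H, W in [0, 1]);
    everything outside is copied unchanged from the input image."""
    return mask * edited + (1.0 - mask) * original


# Toy usage: interpolate all tag probabilities from 0.2 to 0.8 and produce the
# conditioning feature that would be fed to the pre-trained diffusion model.
enc = ProbabilityEncoder(num_tags=8)
p0, p1 = torch.full((1, 8), 0.2), torch.full((1, 8), 0.8)
features = [enc((1 - t) * p0 + t * p1) for t in torch.linspace(0, 1, 5)]
```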

Citations: 0
Seamless and Aligned Texture Optimization for 3D Reconstruction
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1111/cgf.15205
Lei Wang, Linlin Ge, Qitong Zhang, Jieqing Feng

Restoring the appearance of the model is a crucial step in achieving realistic 3D reconstruction. High-fidelity textures can also conceal some geometric defects. Since the estimated camera parameters and reconstructed geometry usually contain errors, subsequent texture mapping often suffers from undesirable visual artifacts such as blurring, ghosting, and visual seams. In particular, significant misalignment between the reconstructed model and the registered images leads to texturing the mesh with inconsistent image regions. However, eliminating these various artifacts to generate high-quality textures remains a challenge. In this paper, we address this issue by designing a texture optimization method that generates seamless and aligned textures for 3D reconstruction. The main idea is to detect misalignment regions between images and geometry and exclude them from texture mapping. To handle the texture holes caused by these excluded regions, a cross-patch texture hole-filling method is proposed, which can also synthesize plausible textures for invisible faces. Moreover, for better stitching of the textures from different views, an improved camera pose optimization is presented that introduces color adjustment and boundary point sampling. Experimental results show that the proposed method robustly eliminates the artifacts caused by inaccurate input data and produces high-quality texture results compared with state-of-the-art methods.
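The misalignment test at the core of the method can be pictured with a small sketch: a mesh face is dropped from direct texture mapping when the colours it receives from different registered views disagree too strongly, and such faces would later be filled by the cross-patch hole-filling step. The sampling layout, threshold and NumPy code below are illustrative assumptions, not the paper's actual criterion.

```python
# Toy photometric disagreement test for one mesh face across several views.
import numpy as np

def face_is_misaligned(samples_per_view: np.ndarray, tau: float = 0.1) -> bool:
    """samples_per_view: (V, S, 3) RGB samples of one face projected into V
    registered views, values in [0, 1]. The face is flagged when the mean
    per-sample standard deviation across views exceeds the threshold tau."""
    spread = samples_per_view.std(axis=0).mean()
    return bool(spread > tau)

# Consistent views (small noise around one texture) pass; adding one view that
# sees a different image region because of misalignment trips the threshold.
rng = np.random.default_rng(0)
base = rng.random((1, 64, 3))
consistent = base + 0.02 * rng.normal(size=(4, 64, 3))
misaligned = np.concatenate([consistent, rng.random((1, 64, 3))], axis=0)
print(face_is_misaligned(consistent), face_is_misaligned(misaligned))
```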

Citations: 0
CrystalNet: Texture-Aware Neural Refraction Baking for Global Illumination
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1111/cgf.15227
Z. Zhang, E. Simo-Serra

Neural rendering bakes global illumination and other computationally costly effects into the weights of a neural network, allowing photorealistic images to be synthesized efficiently without relying on path tracing. In neural rendering approaches, G-buffers obtained from rasterization through direct rendering provide information about the scene, such as position, normal, and textures, to the neural network, achieving accurate and stable rendering quality in real time. However, due to the use of G-buffers, existing methods struggle to accurately render transparency and refraction effects, as G-buffers do not capture any ray information from multiple light ray bounces. This limitation results in blurriness, distortions, and loss of detail in rendered images that contain transparency and refraction, and is particularly notable in scenes with refracted objects that have high-frequency textures. In this work, we propose a neural network architecture to encode critical rendering information, including texture coordinates from refracted rays, and enable reconstruction of high-frequency textures in areas with refraction. Our approach is able to achieve accurate refraction rendering in challenging scenes with a diversity of overlapping transparent objects. Experimental results demonstrate that our method can interactively render high-quality refraction effects with global illumination, unlike existing neural rendering approaches. Our code can be found at https://github.com/ziyangz5/CrystalNet
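Since the key extra input the abstract mentions is information from refracted rays, a small sketch of the underlying geometry may be useful: Snell's law gives the direction a ray takes inside a transparent object, which determines where texture coordinates behind it are looked up. The vector convention and the air-to-glass example below are illustrative assumptions, not code from the paper.

```python
# Snell's law refraction of a ray at a surface.
import numpy as np

def refract(incident: np.ndarray, normal: np.ndarray, eta: float):
    """incident: unit direction travelling towards the surface;
    normal: unit outward surface normal; eta: n_outside / n_inside.
    Returns the unit refracted direction, or None on total internal reflection."""
    cos_i = -float(np.dot(normal, incident))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                          # total internal reflection
    t = eta * incident + (eta * cos_i - np.sqrt(k)) * normal
    return t / np.linalg.norm(t)

# Air-to-glass example: a 45-degree ray bends towards the normal.
d = np.array([np.sin(np.pi / 4), -np.cos(np.pi / 4), 0.0])
n = np.array([0.0, 1.0, 0.0])
print(refract(d, n, 1.0 / 1.5))
```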

Citations: 0
PCLC-Net: Point Cloud Completion in Arbitrary Poses with Learnable Canonical Space
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1111/cgf.15217
Hanmo Xu, Qingyao Shuai, Xuejin Chen

Recovering the complete structure from partial point clouds in arbitrary poses is challenging. Recently, many efforts have been made to address this problem by developing SO(3)-equivariant completion networks or aligning the partial point clouds with a predefined canonical space before completion. However, these approaches are limited to random rotations only or demand costly pose annotation for model training. In this paper, we present a novel Network for Point cloud Completion with Learnable Canonical space (PCLC-Net) to reduce the need for pose annotations and extract SE(3)-invariant geometry features to improve the completion quality in arbitrary poses. Without pose annotations, our PCLC-Net utilizes self-supervised pose estimation to align the input partial point clouds to a canonical space that is learnable for an object category and subsequently performs shape completion in the learned canonical space. Our PCLC-Net can complete partial point clouds with arbitrary SE(3) poses without requiring pose annotations for supervision. Our PCLC-Net achieves state-of-the-art results on shape completion with arbitrary SE(3) poses on both synthetic and real scanned data. To the best of our knowledge, our method is the first to achieve shape completion in arbitrary poses without pose annotations during network training.
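A hedged sketch of the pose-then-complete idea in the abstract: an estimated SE(3) pose takes the partial cloud into a learned canonical frame, completion would run there, and the inverse transform maps the result back to the input pose. The pose is assumed given here; the self-supervised pose estimator and the completion network themselves are not shown.

```python
# Plumbing of an SE(3) alignment to and from a canonical frame.
import numpy as np

def to_canonical(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points: (N, 3); R: (3, 3) rotation and t: (3,) translation of the
    estimated pose. Returns the cloud expressed in the canonical frame."""
    return (points - t) @ R                  # equivalent to R^T (p - t) per point

def from_canonical(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    return points @ R.T + t                  # inverse map back to the input pose

# Round-trip check with a random rigid pose.
rng = np.random.default_rng(0)
A = np.linalg.qr(rng.normal(size=(3, 3)))[0]
R = A if np.linalg.det(A) > 0 else -A        # ensure a proper rotation
t = rng.normal(size=3)
partial = rng.normal(size=(128, 3))
assert np.allclose(from_canonical(to_canonical(partial, R, t), R, t), partial)
```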

Citations: 0
Gaussian in the Dark: Real-Time View Synthesis From Inconsistent Dark Images Using Gaussian Splatting
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1111/cgf.15213
Sheng Ye, Zhen-Hui Dong, Yubin Hu, Yu-Hui Wen, Yong-Jin Liu

3D Gaussian Splatting has recently emerged as a powerful representation that can synthesize remarkable novel views using consistent multi-view images as input. However, we notice that images captured in dark environments, where the scenes are not fully illuminated, can exhibit considerable brightness variations and multi-view inconsistency, which poses great challenges to 3D Gaussian Splatting and severely degrades its performance. To tackle this problem, we propose Gaussian-DK. Observing that the inconsistencies are mainly caused by camera imaging, we represent a consistent radiance field of the physical world using a set of anisotropic 3D Gaussians, and design a camera response module to compensate for multi-view inconsistencies. We also introduce a step-based gradient scaling strategy to prevent Gaussians near the camera, which tend to become floaters, from splitting and cloning. Experiments on our proposed benchmark dataset demonstrate that Gaussian-DK produces high-quality renderings without ghosting and floater artifacts and significantly outperforms existing methods. Furthermore, we can also synthesize light-up images by controlling exposure levels that clearly show details in shadow areas.
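The camera response idea can be sketched in a few lines: the 3D Gaussians keep one consistent radiance field, while small per-image parameters absorb the exposure differences between dark captures. The gain-plus-gamma form below is an assumed stand-in for the paper's response module, not its actual design.

```python
# Per-view exposure/response compensation applied to rendered radiance.
import torch

class PerViewResponse(torch.nn.Module):
    def __init__(self, num_views: int):
        super().__init__()
        self.log_gain = torch.nn.Parameter(torch.zeros(num_views))
        self.log_gamma = torch.nn.Parameter(torch.zeros(num_views))

    def forward(self, radiance: torch.Tensor, view_idx: int) -> torch.Tensor:
        """radiance: (..., 3) linear values rendered from the shared 3D
        representation. Returns the view-specific observed colour that would
        be compared against captured image view_idx in the photometric loss."""
        gain = self.log_gain[view_idx].exp()
        gamma = self.log_gamma[view_idx].exp()
        return (gain * radiance).clamp(min=1e-6) ** gamma

# The optimiser can then explain brightness changes through these parameters
# instead of corrupting the Gaussians themselves.
resp = PerViewResponse(num_views=30)
observed = resp(torch.rand(4, 4, 3), view_idx=7)
```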

Citations: 0
TempDiff: Enhancing Temporal-awareness in Latent Diffusion for Real-World Video Super-Resolution
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-18 | DOI: 10.1111/cgf.15211
Q. Jiang, Q.L. Wang, L.H. Chi, X.H. Chen, Q.Y. Zhang, R. Zhou, Z.Q. Deng, J.S. Deng, B.B. Tang, S.H. Lv, J. Liu

Latent diffusion models (LDMs) have demonstrated remarkable success in generative modeling. It is promising to leverage the potential of diffusion priors to enhance performance in image and video tasks. However, applying LDMs to video super-resolution (VSR) presents significant challenges due to the high demands for realistic details and temporal consistency in generated videos, exacerbated by the inherent stochasticity in the diffusion process. In this work, we propose a novel diffusion-based framework, Temporal-awareness Latent Diffusion Model (TempDiff), specifically designed for real-world video super-resolution, where degradations are diverse and complex. TempDiff harnesses the powerful generative prior of a pre-trained diffusion model and enhances temporal awareness through the following mechanisms: 1) Incorporating temporal layers into the denoising U-Net and VAE-Decoder, and fine-tuning these added modules to maintain temporal coherency; 2) Estimating optical flow guidance using a pre-trained flow net for latent optimization and propagation across video sequences, ensuring overall stability in the generated high-quality video. Extensive experiments demonstrate that TempDiff achieves compelling results, outperforming state-of-the-art methods on both synthetic and real-world VSR benchmark datasets. Code will be available at https://github.com/jiangqin567/TempDiff
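A minimal sketch of the optical-flow guidance step, assuming the flow field is already estimated at latent resolution: the previous frame's latent is warped towards the current frame so the denoiser sees temporally aligned features. Shapes, units and the warping choice are illustrative assumptions rather than the paper's exact propagation scheme.

```python
# Flow-guided warping of a latent feature map with grid_sample.
import torch
import torch.nn.functional as F

def warp_latent(prev_latent: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """prev_latent: (B, C, H, W); flow: (B, 2, H, W) displacement in pixels
    mapping current-frame positions to previous-frame positions."""
    b, _, h, w = prev_latent.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(prev_latent)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                             # sample here
    # Normalise to [-1, 1] for grid_sample, which expects (B, H, W, 2) as (x, y).
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)
    return F.grid_sample(prev_latent, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

prev = torch.randn(1, 4, 32, 32)
flow = torch.zeros(1, 2, 32, 32)             # zero flow: the warp is an identity
assert torch.allclose(warp_latent(prev, flow), prev, atol=1e-5)
```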

Citations: 0
NeuPreSS: Compact Neural Precomputed Subsurface Scattering for Distant Lighting of Heterogeneous Translucent Objects
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-18 | DOI: 10.1111/cgf.15234
T. TG, J. R. Frisvad, R. Ramamoorthi, H. W. Jensen

Monte Carlo rendering of translucent objects with heterogeneous scattering properties is often expensive both in terms of memory and computation. If the scattering properties are described by a 3D texture, memory consumption is high. If we do path tracing and use a high dynamic range lighting environment, the computational cost of the rendering can easily become significant. We propose a compact and efficient neural method for representing and rendering the appearance of heterogeneous translucent objects. Instead of assuming only surface variation of optical properties, our method represents the appearance of a full object, taking its geometry and volumetric heterogeneities into account. This is similar to a neural radiance field, but our representation works for an arbitrary distant lighting environment. In a sense, we present a version of neural precomputed radiance transfer that captures relighting of heterogeneous translucent objects. We use a multi-layer perceptron (MLP) with skip connections to represent the appearance of an object as a function of spatial position, direction of observation, and direction of incidence. The latter is considered a directional light incident across the entire non-self-shadowed part of the object. We demonstrate the ability of our method to compactly store highly complex materials while maintaining high accuracy when compared to reference images of the represented object in unseen lighting environments. Compared with path tracing of a heterogeneous light scattering volume behind a refractive interface, our method more easily enables importance sampling of the directions of incidence and can be integrated into existing rendering frameworks while achieving interactive frame rates.
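As a rough sketch of the representation the abstract describes, the following MLP with a skip connection maps spatial position, observation direction and incident light direction to RGB radiance. Layer widths, the single skip and the absence of any input encoding are simplifying assumptions, not the paper's architecture.

```python
# Appearance MLP: (position, view direction, light direction) -> radiance.
import torch
import torch.nn as nn

class TranslucentAppearanceMLP(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        in_dim = 3 + 3 + 3                    # position, view dir, light dir
        self.first = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        # Skip connection: the raw input is concatenated back in mid-network.
        self.second = nn.Sequential(nn.Linear(hidden + in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 3))

    def forward(self, x: torch.Tensor, wo: torch.Tensor, wi: torch.Tensor):
        inp = torch.cat([x, wo, wi], dim=-1)
        h = self.first(inp)
        return self.second(torch.cat([h, inp], dim=-1))   # RGB radiance

# One query per (surface point, camera direction, distant light direction).
net = TranslucentAppearanceMLP()
rgb = net(torch.rand(1024, 3), torch.rand(1024, 3), torch.rand(1024, 3))
```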

Citations: 0
Hierarchical Spherical Cross-Parameterization for Deforming Characters
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-19 | DOI: 10.1111/cgf.15197
Lizhou Cao, Chao Peng

The demand for immersive technology and realistic virtual environments has created a need for automated solutions to generate characters with morphological variations. However, existing approaches either rely on manual labour or oversimplify the problem by limiting it to static meshes or deformation transfers without shape morphing. In this paper, we propose a new cross-parameterization approach that semi-automates the generation of morphologically diverse characters with synthesized articulations and animations. The main contribution of this work is that our approach parameterizes deforming characters into a novel hierarchical multi-sphere domain, while considering the attributes of mesh topology, deformation and animation. With such a multi-sphere domain, our approach minimizes parametric distortion rates, enhances the bijectivity of parameterization and aligns deforming feature correspondences. The alignment process we propose allows users to focus only on major joint pairs, which is much simpler and more intuitive than the existing alignment solutions that involve a manual process of identifying feature points on mesh surfaces. Compared to recent works, our approach achieves high-quality results in the applications of 3D morphing, texture transfer, character synthesis and deformation transfer.
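For readers unfamiliar with spherical domains, the toy projection below shows the simplest possible spherical parameterization: mapping a genus-zero mesh's vertices radially onto a unit sphere about the centroid. It is only background for the term; the paper's hierarchical multi-sphere construction, distortion control and joint-pair alignment are not represented.

```python
# Naive radial spherical parameterization (background illustration only).
import numpy as np

def naive_spherical_parameterization(vertices: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) positions of a genus-zero, roughly star-shaped mesh.
    Returns unit-sphere coordinates obtained by radial projection about the
    centroid. Practical methods instead optimise the map to limit distortion
    and guarantee bijectivity."""
    centered = vertices - vertices.mean(axis=0)
    radii = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / np.maximum(radii, 1e-12)

sphere_coords = naive_spherical_parameterization(np.random.rand(500, 3))
assert np.allclose(np.linalg.norm(sphere_coords, axis=1), 1.0)
```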

Citations: 0
Deep SVBRDF Acquisition and Modelling: A Survey
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-16 | DOI: 10.1111/cgf.15199
Behnaz Kavoosighafi, Saghi Hajisharif, Ehsan Miandji, Gabriel Baravdish, Wen Cao, Jonas Unger

Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state-of-the-art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/.

Citations: 0
EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-13 | DOI: 10.1111/cgf.15200
Yuhua Liu, Yuming Ma, Qing Shi, Jin Wen, Wanjun Zheng, Xuanwu Yue, Hang Ye, Wei Chen, Yuwei Meng, Zhiguang Zhou

Experimental economics is an important branch of economics that studies human behaviour in a controlled laboratory setting or in the field. Scientific experiments are conducted in experimental economics to record the decisions people make in specific circumstances and to verify economic theories. As a key pair of variables in the virtual experimental environment, decisions and outcomes change with the subjective factors of participants and with objective circumstances, making it difficult to capture human behaviour patterns and establish the correlations needed to verify economic theories. In this paper, we present a visual analytics system, EBPVis, which enables economists to visually explore human behaviour patterns and faithfully verify economic theories, e.g. the vicious cycle of poverty and the poverty trap. We utilize a Doc2Vec model to transform the economic behaviours of participants into a vectorized space according to their sequential decisions, where frequent sequences can be easily perceived and extracted to represent human behaviour patterns. To explore the correlation between decisions and outcomes, an Outcome View is designed to display the outcome variables for behaviour patterns. We also provide a Comparison View to support efficient comparison between multiple behaviour patterns by revealing their differences in terms of decision combinations and time-varying profits. Moreover, an Individual View is designed to illustrate the outcome accumulation and behaviour patterns of individual subjects. Case studies, expert feedback and user studies based on a real-world dataset have demonstrated the effectiveness and practicability of EBPVis in representing economic behaviour patterns and certifying economic theories.
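The embedding step can be sketched with gensim's Doc2Vec, treating each participant's ordered decisions as a document so that similar behaviour sequences receive nearby vectors. The token vocabulary and hyper-parameters below are invented for illustration; the paper's actual preprocessing is not specified here.

```python
# Embedding sequential economic decisions with Doc2Vec.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# One token per round of the experiment, e.g. "save", "borrow", "invest".
sequences = {
    "p01": ["work", "save", "save", "invest", "consume"],
    "p02": ["work", "borrow", "consume", "consume", "borrow"],
    "p03": ["work", "save", "invest", "invest", "save"],
}
docs = [TaggedDocument(words=seq, tags=[pid]) for pid, seq in sequences.items()]
model = Doc2Vec(docs, vector_size=16, window=2, min_count=1, epochs=100)

# Vectors for new or existing sequences; nearby vectors suggest similar
# behaviour patterns that the visual views can then group and compare.
vec = model.infer_vector(["work", "save", "invest"])
print(vec.shape)
```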

Citations: 0