
Latest Publications in IEEE Transactions on Visualization and Computer Graphics

IntrinsicNGP: Intrinsic Coordinate based Hash Encoding for Human NeRF
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-02-28 | DOI: 10.48550/arXiv.2302.14683
Bo Peng, Jun Hu, Jingtao Zhou, Xuan Gao, Ju-yong Zhang
Recently, many works have been proposed to utilize the neural radiance field for novel view synthesis of human performers. However, most of these methods require hours of training, making them impractical to use. To address this challenging problem, we propose IntrinsicNGP, which can train from scratch and achieve high-fidelity results within a few minutes from videos of a human performer. To achieve this, we introduce a continuous and optimizable intrinsic coordinate, in place of the original explicit Euclidean coordinate, in the hash encoding module of instant-NGP. With this novel intrinsic coordinate, IntrinsicNGP can aggregate inter-frame information for dynamic objects with the help of proxy geometry shapes. Moreover, the results trained with the given rough geometry shapes can be further refined with an optimizable offset field based on the intrinsic coordinate. Extensive experimental results on several datasets demonstrate the effectiveness and efficiency of IntrinsicNGP. We also illustrate our approach's ability to edit the shape of reconstructed subjects.
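The core idea above, replacing the Euclidean input of instant-NGP's hash encoding with a surface-relative intrinsic coordinate, can be illustrated with a minimal sketch. This is not the authors' implementation: the proxy geometry is a toy vertex set, the intrinsic coordinate is simply nearest-vertex UV plus a signed normal offset, and the lookup uses a single hash level without the multi-level interpolation instant-NGP performs; all names and dimensions are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of hashing an intrinsic coordinate
# derived from a proxy surface instead of a point's Euclidean xyz, so the
# same surface point maps to the same hash entry across frames.
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def intrinsic_coordinate(x, proxy_vertices, proxy_uv, proxy_normals):
    """Map a Euclidean query point x to (u, v, h): UV of the nearest proxy
    vertex plus a signed distance h along that vertex's normal."""
    d = np.linalg.norm(proxy_vertices - x, axis=1)
    i = int(np.argmin(d))
    h = float(np.dot(x - proxy_vertices[i], proxy_normals[i]))
    return np.array([proxy_uv[i, 0], proxy_uv[i, 1], h], dtype=np.float64)

def hash_encode(coord, table, resolution):
    """Look up a feature for an intrinsic coordinate on one hash-grid level
    (nearest-cell lookup; instant-NGP interpolates the 8 cell corners)."""
    cell = np.floor(coord * resolution).astype(np.int64).astype(np.uint64)
    idx = np.bitwise_xor.reduce(cell * PRIMES) % np.uint64(table.shape[0])
    return table[int(idx)]

# Toy proxy geometry: 100 vertices with UVs and normals, one hash level.
rng = np.random.default_rng(0)
verts = rng.uniform(-1, 1, (100, 3))
uvs = rng.uniform(0, 1, (100, 2))
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
table = rng.normal(size=(2**14, 2))          # 2-dim feature per hash entry

q = np.array([0.1, -0.2, 0.3])
feat = hash_encode(intrinsic_coordinate(q, verts, uvs, normals), table, 64)
print(feat)
```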
Citations: 4
MoReVis: A Visual Summary for Spatiotemporal Moving Regions
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-02-26 | DOI: 10.48550/arXiv.2302.13199
Giovani Valdrighi, Nivan Ferreira, Jorge Poco
Spatial and temporal interactions are central and fundamental to many activities in our world. A common problem when visualizing this type of data is how to provide an overview that helps users navigate efficiently. Traditional approaches use coordinated views or 3D metaphors such as the space-time cube to tackle this problem. However, they suffer from overplotting and often lack spatial context, hindering data exploration. More recent techniques, such as MotionRugs, propose compact temporal summaries based on 1D projection. While powerful, these techniques do not support situations in which the spatial extent of the objects and their intersections is relevant, such as the analysis of surveillance videos or the tracking of weather storms. In this paper, we propose MoReVis, a visual overview of spatiotemporal data that considers the objects' spatial extent and strives to show spatial interactions among these objects by displaying spatial intersections. Like previous techniques, our method projects the spatial coordinates to 1D to produce compact summaries. However, the core of our solution is a layout optimization step that sets the sizes and positions of the visual marks on the summary to resemble the actual values in the original space. We also provide multiple interactive mechanisms to make interpreting the results more straightforward for the user. We perform an extensive experimental evaluation and present usage scenarios. Moreover, we evaluated the usefulness of MoReVis in a study with 9 participants. The results point out the effectiveness and suitability of our method for representing different datasets compared to traditional techniques.
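A minimal sketch of the projection-plus-layout idea described above (not the MoReVis implementation): each object's 2D center is projected to 1D, its mark gets an interval whose length matches its spatial extent, and a greedy pass separates intervals whose objects do not actually intersect in 2D, a crude stand-in for the paper's layout optimization. The object representation (center plus radius) and the adjustment rule are illustrative assumptions.

```python
# Sketch: per-frame 1D summary of moving regions with a greedy layout pass
# that only lets intervals overlap when the underlying objects really do.
import numpy as np

def summarize_frame(centers, radii, axis=np.array([1.0, 0.0])):
    """centers: (n,2), radii: (n,). Returns {object_id: (lo, hi)} intervals."""
    proj = centers @ axis                          # 1D projection of centers
    order = np.argsort(proj)                       # left-to-right on the axis
    iv = {i: [proj[i] - radii[i], proj[i] + radii[i]] for i in range(len(radii))}
    # Greedy pass: push apart neighbours that overlap in 1D but not in 2D,
    # so an intersection drawn in the summary implies a real intersection.
    for a, b in zip(order[:-1], order[1:]):
        really_overlap = np.linalg.norm(centers[a] - centers[b]) < radii[a] + radii[b]
        if not really_overlap and iv[b][0] < iv[a][1]:
            shift = iv[a][1] - iv[b][0]
            iv[b] = [iv[b][0] + shift, iv[b][1] + shift]
    return {int(i): tuple(v) for i, v in iv.items()}

centers = np.array([[0.0, 0.0], [0.5, 3.0], [0.6, 0.1]])
radii = np.array([0.4, 0.4, 0.3])
print(summarize_frame(centers, radii))
```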
Citations: 0
LC-NeRF: Local Controllable Face Generation in Neural Radiance Field
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-02-19 | DOI: 10.48550/arXiv.2302.09486
Wen-Yang Zhou, Lu Yuan, Shu-Yu Chen, Lin Gao, Shimin Hu
3D face generation has achieved high visual quality and 3D consistency thanks to the development of neural radiance fields (NeRF). However, these methods model the whole face as a neural radiance field, which limits the controllability of the local regions. In other words, previous methods struggle to independently control local regions, such as the mouth, nose, and hair. To improve local controllability in NeRF-based face generation, we propose LC-NeRF, which is composed of a Local Region Generators Module (LRGM) and a Spatial-Aware Fusion Module (SAFM), allowing for geometry and texture control of local facial regions. The LRGM models different facial regions as independent neural radiance fields and the SAFM is responsible for merging multiple independent neural radiance fields into a complete representation. Finally, LC-NeRF enables the modification of the latent code associated with each individual generator, thereby allowing precise control over the corresponding local region. Qualitative and quantitative evaluations show that our method provides better local controllability than state-of-the-art 3D-aware face generation methods. A perception study reveals that our method outperforms existing state-of-the-art methods in terms of image quality, face consistency, and editing effects. Furthermore, our method exhibits favorable performance in downstream tasks, including real image editing and text-driven facial image editing.
Citations: 0
Audio2Gestures: Generating Diverse Gestures from Audio
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-01-17 | DOI: 10.48550/arXiv.2301.06690
Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Linchao Bao, Zhenyu He
People may perform diverse gestures affected by various mental and physical factors when speaking the same sentences. This inherent one-to-many relationship makes co-speech gesture generation from audio particularly challenging. Conventional CNNs/RNNs assume a one-to-one mapping and thus tend to predict the average of all possible target motions, easily resulting in plain/boring motions during inference. We therefore propose to explicitly model the one-to-many audio-to-motion mapping by splitting the cross-modal latent code into a shared code and a motion-specific code. The shared code is expected to be responsible for the motion component that is more correlated with the audio, while the motion-specific code is expected to capture diverse motion information that is more independent of the audio. However, splitting the latent code into two parts poses extra training difficulties. Several crucial training losses/strategies, including relaxed motion loss, bicycle constraint, and diversity loss, are designed to better train the VAE. Experiments on both 3D and 2D motion datasets verify that our method generates more realistic and diverse motions than previous state-of-the-art methods, both quantitatively and qualitatively. Besides, our formulation is compatible with discrete cosine transform (DCT) modeling and other popular backbones (i.e., RNN, Transformer). As for motion losses and quantitative motion evaluation, we find that structured losses/metrics (e.g., STFT) that consider temporal and/or spatial context complement the most commonly used point-wise losses (e.g., PCK), resulting in better motion dynamics and more nuanced motion details. Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline.
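The latent-code split described above can be sketched as follows; this is a toy stand-in, not the authors' network: an audio encoder produces the shared code, a motion encoder produces the motion-specific code, and a decoder reconstructs motion from their concatenation. Layer sizes, dimensions, and the omission of the paper's extra losses (relaxed motion loss, bicycle constraint, diversity loss) are simplifications.

```python
# Sketch of a VAE whose latent code is split into an audio-derived shared
# part and a motion-derived motion-specific part.
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, audio_dim=128, motion_dim=96, shared_dim=32, specific_dim=32):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU(),
                                       nn.Linear(64, shared_dim * 2))
        self.motion_enc = nn.Sequential(nn.Linear(motion_dim, 64), nn.ReLU(),
                                        nn.Linear(64, specific_dim * 2))
        self.decoder = nn.Sequential(nn.Linear(shared_dim + specific_dim, 64),
                                     nn.ReLU(), nn.Linear(64, motion_dim))

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, audio, motion):
        z_shared = self.reparameterize(self.audio_enc(audio))      # audio-correlated part
        z_specific = self.reparameterize(self.motion_enc(motion))  # audio-independent part
        return self.decoder(torch.cat([z_shared, z_specific], dim=-1))

model = SplitLatentVAE()
recon = model(torch.randn(4, 128), torch.randn(4, 96))
print(recon.shape)   # torch.Size([4, 96])
```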
Citations: 1
NeRF-Art: Text-Driven Neural Radiance Fields Stylization
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-12-15 | DOI: 10.48550/arXiv.2212.08070
Can Wang, Ruixia Jiang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially in simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency. The code and more results can be found on our project page: https://cassiepython.github.io/nerfart/.
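The directional constraint mentioned above is commonly realized as a CLIP-space directional loss; the sketch below shows that general form under the assumption that CLIP-like embeddings are already available (here replaced by random placeholders). It is not the exact NeRF-Art objective, which additionally combines global-local contrastive terms and weight regularization.

```python
# Sketch of a directional constraint: the change from the source render's
# embedding to the stylized render's embedding should align with the change
# from the source text embedding to the target text embedding.
import torch
import torch.nn.functional as F

def directional_loss(img_src, img_sty, txt_src, txt_tgt):
    d_img = F.normalize(img_sty - img_src, dim=-1)   # rendering direction
    d_txt = F.normalize(txt_tgt - txt_src, dim=-1)   # text (style) direction
    return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()

# Placeholder 512-d "CLIP" embeddings for a batch of 8 rendered views.
img_src, img_sty = torch.randn(8, 512), torch.randn(8, 512)
txt_src, txt_tgt = torch.randn(1, 512), torch.randn(1, 512)
print(directional_loss(img_src, img_sty, txt_src, txt_tgt))
```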
Citations: 34
What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-11-11 | DOI: 10.48550/arXiv.2211.06009
Zezeng Li, Zebin Xu, Ying Li, X. Gu, Na Lei
Intelligent Mesh Generation (IMG) represents a novel and promising field of research, utilizing machine learning techniques to generate meshes. Despite its relative infancy, IMG has significantly broadened the adaptability and practicality of mesh generation techniques, delivering numerous breakthroughs and unveiling potential future pathways. However, a noticeable void exists in the contemporary literature concerning comprehensive surveys of IMG methods. This paper endeavors to fill this gap by providing a systematic and thorough survey of the current IMG landscape. With a focus on 113 preliminary IMG methods, we undertake a meticulous analysis from various angles, encompassing core algorithm techniques and their application scope, agent learning objectives, data types, targeted challenges, as well as advantages and limitations. We have curated and categorized the literature, proposing three unique taxonomies based on key techniques, output mesh unit elements, and relevant input data types. This paper also underscores several promising future research directions and challenges in IMG. To augment reader accessibility, a dedicated IMG project page is available at https://github.com/xzb030/IMG_Survey.
Citations: 1
GPA-Net: No-Reference Point Cloud Quality Assessment with Multi-task Graph Convolutional Network
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-10-29 | DOI: 10.48550/arXiv.2210.16478
Ziyu Shan, Qi Yang, Rui Ye, Yujie Zhang, Yi Xu, Xiaozhong Xu, Shan Liu
With the rapid development of 3D vision, the point cloud has become an increasingly popular form of 3D visual media content. Due to its irregular structure, the point cloud poses novel challenges to related research, such as compression, transmission, rendering, and quality assessment. Among these topics, point cloud quality assessment (PCQA) has attracted wide attention due to its significant role in guiding practical applications, especially in the many cases where the reference point cloud is unavailable. However, current no-reference metrics based on prevalent deep neural networks have apparent disadvantages. For example, to adapt to the irregular structure of the point cloud, they require preprocessing such as voxelization and projection, which introduces extra distortions, and the applied grid-kernel networks, such as Convolutional Neural Networks, fail to extract effective distortion-related features. Besides, they rarely consider the various distortion patterns or the principle that PCQA should exhibit shift, scaling, and rotation invariance. In this paper, we propose a novel no-reference PCQA metric named the Graph convolutional PCQA network (GPA-Net). To extract effective features for PCQA, we propose a new graph convolution kernel, i.e., GPAConv, which attentively captures the perturbation of structure and texture. Then, we propose a multi-task framework consisting of one main task (quality regression) and two auxiliary tasks (distortion type and degree predictions). Finally, we propose a coordinate normalization module to stabilize the results of GPAConv under shift, scale, and rotation transformations. Experimental results on two independent databases show that GPA-Net achieves the best performance compared to state-of-the-art no-reference PCQA metrics, and in some cases even outperforms some full-reference metrics. The code is available at: https://github.com/Slowhander/GPA-Net.git.
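A small sketch of the kind of coordinate normalization the abstract refers to (not necessarily GPA-Net's actual module): centering, scaling to unit extent, and rotating into the PCA frame makes shifted, scaled, or rotated copies of a point cloud map to nearly the same coordinates, up to axis sign flips.

```python
# Sketch: shift/scale/rotation normalization of a point cloud via centering,
# max-norm scaling, and PCA alignment.
import numpy as np

def normalize_point_cloud(points):
    """points: (n, 3) array. Returns a shift/scale/rotation-normalized copy."""
    centered = points - points.mean(axis=0)                # remove translation
    centered /= np.linalg.norm(centered, axis=1).max()     # remove scale
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                                 # align to PCA axes

rng = np.random.default_rng(1)
cloud = rng.normal(size=(1000, 3))
theta = np.pi / 5
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
a = normalize_point_cloud(cloud)
b = normalize_point_cloud(2.5 * cloud @ rot.T + np.array([3.0, -1.0, 0.5]))
print(np.abs(np.abs(a) - np.abs(b)).max())   # small, up to axis sign flips
```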
Citations: 5
Explore Contextual Information for 3D Scene Graph Generation
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-10-12 | DOI: 10.48550/arXiv.2210.06240
Yu-An Liu, Chengjiang Long, Zhaoxuan Zhang, Bo Liu, Qiang Zhang, Baocai Yin, Xin Yang
3D scene graph generation (SGG) has been of high interest in computer vision. Although the accuracy of 3D SGG on coarse classification and single relation labels has gradually improved, the performance of existing works is still far from perfect for fine-grained and multi-label situations. In this paper, we propose a framework that fully explores contextual information for the 3D SGG task, attempting to satisfy the requirements of fine-grained entity classes, multiple relation labels, and high accuracy simultaneously. Our proposed approach is composed of a Graph Feature Extraction module and a Graph Contextual Reasoning module, achieving appropriate information-redundancy feature extraction, structured organization, and hierarchical inference. Our approach achieves superior or competitive performance over previous methods on the 3DSSG dataset, especially on the relationship prediction sub-task.
Citations: 7
Multi-User Redirected Walking in Separate Physical Spaces for Online VR Scenarios
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-10-07 | DOI: 10.48550/arXiv.2210.05356
Sen-Zhe Xu, Jia-Hong Liu, Miao Wang, Fang-Lue Zhang, Songhai Zhang
With the recent rise of the Metaverse, online multiplayer VR applications are becoming increasingly prevalent worldwide. However, as multiple users are located in different physical environments (PEs), different reset frequencies and timings can lead to serious fairness issues for online collaborative/competitive VR applications. For the fairness of online VR apps/games, an ideal online redirected walking (RDW) strategy must make the locomotion opportunities of different users equal, regardless of the layouts of their physical environments. Existing RDW methods lack a scheme to coordinate multiple users in different PEs, and thus trigger too many resets for all users under the locomotion fairness constraint. We propose a novel multi-user RDW method that is able to significantly reduce the overall number of resets and give users a better immersive experience by providing fair exploration. Our key idea is to first find the "bottleneck" user who may cause all users to be reset and estimate the time until that reset given the users' next targets, and then redirect all users to favorable poses during that maximized bottleneck time to ensure that subsequent resets can be postponed as much as possible. More particularly, we develop methods to estimate the time of possibly encountering obstacles and the reachable area of a specific pose, enabling the prediction of the next reset caused by any user. Our experiments and user study found that our method outperforms existing RDW methods in online VR applications.
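The "bottleneck user" notion can be illustrated with a toy computation; this is an illustrative sketch, not the paper's estimator: each user's time-to-reset is approximated as the time to walk straight from their physical position to their own room boundary, and the user with the smallest time is treated as the bottleneck that schedules everyone's redirection. Room shapes, walking speeds, and the straight-line assumption are simplifications.

```python
# Sketch: pick the bottleneck user as the one who will hit their physical
# room boundary soonest if they keep walking toward their next target.
import numpy as np

def time_to_boundary(pos, direction, half_extent, speed=1.0):
    """Time until a user at `pos` walking along unit 2D vector `direction`
    leaves an axis-aligned room [-half_extent, half_extent]^2."""
    t = np.inf
    for axis in (0, 1):
        if abs(direction[axis]) > 1e-9:
            wall = half_extent if direction[axis] > 0 else -half_extent
            t = min(t, (wall - pos[axis]) / direction[axis])
    return t / speed

users = [  # (physical position, walking direction, room half-size)
    (np.array([0.0, 0.0]), np.array([1.0, 0.0]), 2.0),
    (np.array([1.5, 1.0]), np.array([0.0, 1.0]), 2.0),
    (np.array([-1.0, 0.5]), np.array([0.7071, 0.7071]), 3.0),
]
times = [time_to_boundary(p, d, h) for p, d, h in users]
bottleneck = int(np.argmin(times))
print(f"bottleneck user: {bottleneck}, time to reset: {times[bottleneck]:.2f}s")
```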
Citations: 5
TraInterSim: Adaptive and Planning-Aware Hybrid-Driven Traffic Intersection Simulation
IF 5.2 | CAS Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-10-03 | DOI: 10.48550/arXiv.2210.08118
Pei Lv, Xinming Pei, Xinyu Ren, Yuzhen Zhang, Chaochao Li, Mingliang Xu
Traffic intersections are important scenes that can be seen almost everywhere in traffic systems. Currently, most simulation methods perform well on highways and urban traffic networks. In intersection scenarios, the challenge lies in the lack of clearly defined lanes, where agents with various motion plans converge in the central area from different directions. Traditional model-based methods have difficulty driving agents to move realistically at intersections without enough predefined lanes, while data-driven methods often require a large amount of high-quality input data. Meanwhile, tedious parameter tuning is inevitably involved in obtaining the desired simulation results. In this paper, we present a novel adaptive and planning-aware hybrid-driven method (TraInterSim) to simulate traffic intersection scenarios. Our hybrid-driven method combines an optimization-based data-driven scheme with a velocity continuity model. It guides the agents' movements using real-world data and can generate behaviors not present in the input data. Our optimization method fully considers velocity continuity, desired speed, direction guidance, and planning-aware collision avoidance. Agents can perceive others' motion plans and relative distances to avoid possible collisions. To preserve the individual flexibility of different agents, the parameters in our method are automatically adjusted during the simulation. TraInterSim can generate realistic behaviors of heterogeneous agents in different traffic intersection scenarios at interactive rates. Through extensive experiments as well as user studies, we validate the effectiveness and rationality of the proposed simulation method.
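A minimal sketch of a velocity-continuity-style update in the spirit of the description above (not TraInterSim itself): an agent's next velocity trades off staying close to its previous velocity, moving at the desired speed along a guidance direction, and steering away from nearby agents. The weights and the repulsion rule are illustrative assumptions.

```python
# Sketch: blend velocity continuity, goal seeking at a desired speed, and a
# simple repulsion term from nearby agents into the next velocity.
import numpy as np

def next_velocity(v_prev, goal_dir, desired_speed, neighbours, pos,
                  w_cont=0.5, w_goal=0.4, w_avoid=0.6, radius=2.0):
    v_goal = desired_speed * goal_dir / (np.linalg.norm(goal_dir) + 1e-9)
    v = w_cont * v_prev + w_goal * v_goal          # continuity + goal seeking
    for q in neighbours:                           # planning-aware avoidance
        offset = pos - q
        d = np.linalg.norm(offset)
        if d < radius:
            v += w_avoid * (radius - d) / radius * offset / (d + 1e-9)
    return v

v = next_velocity(v_prev=np.array([1.0, 0.0]),
                  goal_dir=np.array([1.0, 1.0]),
                  desired_speed=1.2,
                  neighbours=[np.array([1.0, 0.5])],
                  pos=np.array([0.0, 0.0]))
print(v)
```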
Citations: 0