
ACM Transactions on Graphics: Latest Publications

Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-12-05 | DOI: 10.1145/3635705
Willi Menapace, Aliaksandr Siarohin, Stéphane Lathuilière, Panos Achlioptas, Vladislav Golyanik, Sergey Tulyakov, Elisa Ricci

Neural video game simulators emerged as powerful tools to generate and edit videos. Their idea is to represent games as the evolution of an environment’s state driven by the actions of its agents. While such a paradigm enables users to play a game action-by-action, its rigidity precludes more semantic forms of control. To overcome this limitation, we augment game models with prompts specified as a set of natural language actions and desired states. The result—a Promptable Game Model (PGM)—makes it possible for a user to play the game by prompting it with high- and low-level action sequences. Most captivatingly, our PGM unlocks the director’s mode, where the game is played by specifying goals for the agents in the form of a prompt. This requires learning “game AI”, encapsulated by our animation model, to navigate the scene using high-level constraints, play against an adversary, and devise a strategy to win a point. To render the resulting state, we use a compositional NeRF representation encapsulated in our synthesis model. To foster future research, we present newly collected, annotated and calibrated Tennis and Minecraft datasets. Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art. Our framework, data, and models are available at snap-research.github.io/promptable-game-models.
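
To make the prompting interface concrete, here is a toy sketch in Python; every class, field, and method name below is hypothetical, standing in for the paper's animation and synthesis models rather than any released API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Prompt:
    text: str          # natural-language action or desired state
    start_frame: int   # frame at which the prompt takes effect
    agent_id: int      # which agent the prompt constrains

class ToyPromptableGameModel:
    """Placeholder for the paper's animation + synthesis models."""
    def animate(self, prompts: List[Prompt], n_frames: int) -> List[dict]:
        # A real PGM would run a masked diffusion model over agent states;
        # here we only record which prompts are active at each frame.
        rollout = []
        for t in range(n_frames):
            active = [p.text for p in prompts if p.start_frame <= t]
            rollout.append({"frame": t, "active_prompts": active})
        return rollout

# "Director's mode": play by specifying goals instead of per-frame actions.
pgm = ToyPromptableGameModel()
states = pgm.animate(
    [Prompt("player 1 hits a backhand", start_frame=0, agent_id=1),
     Prompt("player 2 moves to the net", start_frame=10, agent_id=2)],
    n_frames=30,
)
print(states[12])  # both prompts active by frame 12
```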

{"title":"Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models","authors":"Willi Menapace, Aliaksandr Siarohin, Stéphane Lathuilière, Panos Achlioptas, Vladislav Golyanik, Sergey Tulyakov, Elisa Ricci","doi":"10.1145/3635705","DOIUrl":"https://doi.org/10.1145/3635705","url":null,"abstract":"<p>Neural video game simulators emerged as powerful tools to generate and edit videos. Their idea is to represent games as the evolution of an environment’s state driven by the actions of its agents. While such a paradigm enables users to <i>play</i> a game action-by-action, its rigidity precludes more semantic forms of control. To overcome this limitation, we augment game models with <i>prompts</i> specified as a set of <i>natural language</i> actions and <i>desired states</i>. The result—a Promptable Game Model (PGM)—makes it possible for a user to <i>play</i> the game by prompting it with high- and low-level action sequences. Most captivatingly, our PGM unlocks the <i>director’s mode</i>, where the game is played by specifying goals for the agents in the form of a prompt. This requires learning “game AI”, encapsulated by our animation model, to navigate the scene using high-level constraints, play against an adversary, and devise a strategy to win a point. To render the resulting state, we use a compositional NeRF representation encapsulated in our synthesis model. To foster future research, we present newly collected, annotated and calibrated Tennis and Minecraft datasets. Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art. Our framework, data, and models are available at snap-research.github.io/promptable-game-models.</p>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"11 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-12-01 | DOI: 10.1145/3635304
Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Ruihui Li, Chi-Wing Fu

This paper presents a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets. Then, we design a pair of neural networks: a diffusion-based generator to produce diverse shapes in the form of the coarse coefficient volumes and a detail predictor to produce compatible detail coefficient volumes for introducing fine structures and details. Further, we may jointly train an encoder network to learn a latent space for inverting shapes, allowing us to enable a rich variety of whole-shape and region-aware shape manipulations. Both quantitative and qualitative experimental results manifest the compelling shape generation, inversion, and manipulation capabilities of our approach over the state-of-the-art methods.
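
As a rough illustration of the coarse/detail split, the following sketch decomposes a placeholder TSDF volume with a multi-dimensional biorthogonal DWT using PyWavelets; the resolution, wavelet, and boundary mode are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
import pywt

tsdf = np.random.randn(64, 64, 64).astype(np.float32)  # placeholder TSDF volume

# Single-level 3D DWT: 'aaa' is the coarse coefficient volume; the other
# seven sub-bands are the detail coefficients a detail predictor would emit.
coeffs = pywt.dwtn(tsdf, wavelet="bior6.8", mode="periodization")
coarse = coeffs["aaa"]
details = {k: v for k, v in coeffs.items() if k != "aaa"}
print(coarse.shape, sorted(details))  # (32, 32, 32) and 7 detail keys

# The transform is invertible, so generated coefficients map back to a TSDF.
recon = pywt.idwtn(coeffs, wavelet="bior6.8", mode="periodization")
print(np.allclose(recon, tsdf, atol=1e-3))
```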

{"title":"Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation","authors":"Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Ruihui Li, Chi-Wing Fu","doi":"10.1145/3635304","DOIUrl":"https://doi.org/10.1145/3635304","url":null,"abstract":"<p>This paper presents a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain. Specifically, we propose a <i>compact wavelet representation</i> with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets. Then, we design a pair of neural networks: a diffusion-based <i>generator</i> to produce diverse shapes in the form of the coarse coefficient volumes and a <i>detail predictor</i> to produce compatible detail coefficient volumes for introducing fine structures and details. Further, we may jointly train an <i>encoder network</i> to learn a latent space for inverting shapes, allowing us to enable a rich variety of whole-shape and region-aware shape manipulations. Both quantitative and qualitative experimental results manifest the compelling shape generation, inversion, and manipulation capabilities of our approach over the state-of-the-art methods.</p>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":" 14","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138473490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Haisor: Human-Aware Indoor Scene Optimization via Deep Reinforcement Learning
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-18 | DOI: 10.1145/3632947
Jia-Mu Sun, Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas Guibas, Lin Gao

3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible via learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture. Our key observation is that human activity plays a critical role and sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail – the seemingly correct layouts are often not fit for living. To tackle this, we present a human-aware optimization framework Haisor for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence to optimize the indoor scene layout automatically. Based on the hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is our key feature to search for the optimal solution in long-term sequences and large action space. Multiple human-aware rewards are designed as our core criteria of human-scene interaction, aiming to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end on indoor scenes with part-level furniture layouts, including part mobility information. Furthermore, our methodology is extensible and allows utilizing different reward designs to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, which is more realistic and plausible than original state-of-the-art generator results, and our approach produces superior smart actions, outperforming alternative baselines.
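
The reward design lends itself to a compact sketch. Below is a toy "human-aware" score for a candidate 2D layout, combining a furniture-overlap penalty with a free-floor-space term; the terms, weights, and box representation are illustrative assumptions, not Haisor's actual rewards.

```python
def overlap(a, b):
    """Overlap area of two axis-aligned boxes given as (x, y, w, h)."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0.0) * max(dy, 0.0)

def human_aware_reward(boxes, room_area=25.0, w_collide=1.0, w_free=0.5):
    """Toy score: penalize furniture collisions, reward free floor space."""
    collide = sum(overlap(a, b) for i, a in enumerate(boxes) for b in boxes[i + 1:])
    used = sum(w * h for _, _, w, h in boxes)
    free_ratio = max(room_area - used, 0.0) / room_area
    return -w_collide * collide + w_free * free_ratio

layout = [(0, 0, 2, 1), (1.5, 0.5, 1, 1), (3, 3, 1, 2)]  # (x, y, w, h) per item
print(human_aware_reward(layout))  # an RL agent would pick actions to raise this
```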

{"title":"Haisor: Human-Aware Indoor Scene Optimization via Deep Reinforcement Learning","authors":"Jia-Mu Sun, Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas Guibas, Lin Gao","doi":"10.1145/3632947","DOIUrl":"https://doi.org/10.1145/3632947","url":null,"abstract":"<p>3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible via learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture. Our key observation is that human activity plays a critical role and sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail – the seemingly correct layouts are often not fit for living. To tackle this, we present a human-aware optimization framework <span>Haisor</span> for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence to optimize the indoor scene layout automatically. Based on the hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is our key feature to search for the optimal solution in long-term sequences and large action space. Multiple human-aware rewards are designed as our core criteria of human-scene interaction, aiming to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end by giving the indoor scenes with part-level furniture layout including part mobility information. Furthermore, our methodology is extensible and allows utilizing different reward designs to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, which is more realistic and plausible than original state-of-the-art generator results, and our approach produces superior smart actions, outperforming alternative baselines.</p>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"86 19","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138438944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
In the Quest for Scale-Optimal Mappings
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-17 | DOI: 10.1145/3627102
Vladimir Garanzha, Igor Kaporin, Liudmila Kudryavtseva, Francois Protais, Dmitry Sokolov

Optimal mapping is one of the longest-standing problems in computational mathematics. It is natural to measure the relative curve length error under the map to assess its quality. The maximum of such error is called the quasi-isometry constant, and its minimization is a nontrivial max-norm optimization problem. We present a physics-based quasi-isometric stiffening (QIS) algorithm for the max-norm minimization of hyperelastic distortion.

QIS perfectly equidistributes distortion over the entire domain for the ground truth test (unit hemisphere flattening) and, when it is not possible, tends to create zones where all cells have the same distortion. Such zones correspond to fragments of elastic material that became rigid under stiffening, reaching the deformation limit. As such, maps built by QIS are related to the de Boor equidistribution principle, which asks for an integral of a certain error indicator function to be the same over each mesh cell.

Under certain assumptions on the minimization toolbox, we prove that our method can build, in a finite number of steps, a deformation whose maximum distortion is arbitrarily close to the (unknown) minimum. We performed extensive testing: on more than 10,000 domains QIS was reliably better than the competing methods. In summary, we reliably build 2D and 3D mesh deformations with the smallest known distortion estimates for very stiff problems.
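
For reference, the quasi-isometry constant mentioned above can be written via the singular values of the map's Jacobian; this is the standard infinitesimal form implied by the relative-length-error description, not a formula quoted from the paper.

```latex
% A map u is a K-quasi-isometry when, for all x, y in the domain,
%   (1/K) |x - y|  <=  |u(x) - u(y)|  <=  K |x - y| .
% Infinitesimally, with sigma_1(x) >= ... >= sigma_d(x) the singular values
% of the Jacobian J(x), the constant being minimized is
K(u) \;=\; \operatorname*{ess\,sup}_{x}\,
    \max\!\left( \sigma_1\bigl(J(x)\bigr),\; \frac{1}{\sigma_d\bigl(J(x)\bigr)} \right).
```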

{"title":"In the Quest for Scale-Optimal Mappings","authors":"Vladimir Garanzha, Igor Kaporin, Liudmila Kudryavtseva, Francois Protais, Dmitry Sokolov","doi":"10.1145/3627102","DOIUrl":"https://doi.org/10.1145/3627102","url":null,"abstract":"<p>Optimal mapping is one of the longest-standing problems in computational mathematics. It is natural to measure the relative curve length error under map to assess its quality. The maximum of such error is called the quasi-isometry constant, and its minimization is a nontrivial max-norm optimization problem. We present a physics-based quasi-isometric stiffening (QIS) algorithm for the max-norm minimization of hyperelastic distortion. </p><p>QIS perfectly equidistributes distortion over the entire domain for the ground truth test (unit hemisphere flattening) and, when it is not possible, tends to create zones where all cells have the same distortion. Such zones correspond to fragments of elastic material that became rigid under stiffening, reaching the deformation limit. As such, maps built by QIS are related to the de Boor equidistribution principle, which asks for an integral of a certain error indicator function to be the same over each mesh cell. </p><p>Under certain assumptions on the minimization toolbox, we prove that our method can build, in a finite number of steps, a deformation whose maximum distortion is arbitrarily close to the (unknown) minimum. We performed extensive testing: on more than 10,000 domains QIS was reliably better than the competing methods. In summary, we reliably build 2D and 3D mesh deformations with the smallest known distortion estimates for very stiff problems.</p>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"86 23","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138438943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital 3D Smocking Design
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-16 | DOI: 10.1145/3631945
Jing Ren, Aviv Segall, Olga Sorkine-Hornung

We develop an optimization-based method to model smocking, a surface embroidery technique that provides decorative geometric texturing while maintaining stretch properties of the fabric. During smocking, multiple pairs of points on the fabric are stitched together, creating non-manifold geometric features and visually pleasing textures. Designing smocking patterns is challenging, because the outcome of stitching is unpredictable: the final texture is often revealed only when the whole smocking process is completed, necessitating painstaking physical fabrication and time consuming trial-and-error experimentation. This motivates us to seek a digital smocking design method. Straightforward attempts to compute smocked fabric geometry using surface deformation or cloth simulation methods fail to produce realistic results, likely due to the intricate structure of the designs, the large number of contacts and high-curvature folds. We instead formulate smocking as a graph embedding and shape deformation problem. We extract a coarse graph representing the fabric and the stitching constraints, and then derive the graph structure of the smocked result. We solve for the 3D embedding of this graph, which in turn reliably guides the deformation of the high-resolution fabric mesh. Our optimization based method is simple, efficient, and flexible, which allows us to build an interactive system for smocking pattern exploration. To demonstrate the accuracy of our method, we compare our results to real fabrications on a large set of smocking patterns.
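
A stripped-down version of the graph-embedding step can be posed as a small least-squares problem: keep coarse fabric edges near their rest length while forcing stitched vertex pairs to coincide. The energy and weights below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # coarse fabric edges, rest length 1
stitches = [(0, 2)]                       # vertex pairs sewn together

def energy(x):
    p = x.reshape(n, 3)
    e_len = sum((np.linalg.norm(p[i] - p[j]) - 1.0) ** 2 for i, j in edges)
    e_stitch = sum(np.sum((p[i] - p[j]) ** 2) for i, j in stitches)
    return e_len + 10.0 * e_stitch        # stitch constraints weighted harder

x0 = np.random.default_rng(0).normal(size=n * 3)
res = minimize(energy, x0, method="L-BFGS-B")
print(res.fun)                            # near zero: a pleat has formed
print(res.x.reshape(n, 3))                # embedded coarse graph in 3D
```

The resulting embedding would then guide the deformation of the high-resolution fabric mesh, as the abstract describes.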

{"title":"Digital 3D Smocking Design","authors":"Jing Ren, Aviv Segall, Olga Sorkine-Hornung","doi":"10.1145/3631945","DOIUrl":"https://doi.org/10.1145/3631945","url":null,"abstract":"<p>We develop an optimization-based method to model <i>smocking</i>, a surface embroidery technique that provides decorative geometric texturing while maintaining stretch properties of the fabric. During smocking, multiple pairs of points on the fabric are stitched together, creating non-manifold geometric features and visually pleasing textures. Designing smocking patterns is challenging, because the outcome of stitching is unpredictable: the final texture is often revealed only when the whole smocking process is completed, necessitating painstaking physical fabrication and time consuming trial-and-error experimentation. This motivates us to seek a digital smocking design method. Straightforward attempts to compute smocked fabric geometry using surface deformation or cloth simulation methods fail to produce realistic results, likely due to the intricate structure of the designs, the large number of contacts and high-curvature folds. We instead formulate smocking as a graph embedding and shape deformation problem. We extract a coarse graph representing the fabric and the stitching constraints, and then derive the graph structure of the smocked result. We solve for the 3D embedding of this graph, which in turn reliably guides the deformation of the high-resolution fabric mesh. Our optimization based method is simple, efficient, and flexible, which allows us to build an interactive system for smocking pattern exploration. To demonstrate the accuracy of our method, we compare our results to real fabrications on a large set of smocking patterns.</p>","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"87 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138438942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Implicit Surface Tension for SPH Fluid Simulation
CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-07 | DOI: 10.1145/3631936
Stefan Rhys Jeske, Lukas Westhofen, Fabian Löschner, José Antonio Fernández-Fernández, Jan Bender
The numerical simulation of surface tension is an active area of research in many different fields of application and has been attempted using a wide range of methods. Our contribution is the derivation and implementation of an implicit cohesion force based approach for the simulation of surface tension effects using the Smoothed Particle Hydrodynamics (SPH) method. We define a continuous formulation inspired by the properties of surface tension at the molecular scale which is spatially discretized using SPH. An adapted variant of the linearized backward Euler method is used for time discretization, which we also strongly couple with an implicit viscosity model. Finally, we extend our formulation with adhesion forces for interfaces with rigid objects. Existing SPH approaches for surface tension in computer graphics are mostly based on explicit time integration, thereby lacking in stability for challenging settings. We compare our implicit surface tension method to these approaches and further evaluate our model on a wider variety of complex scenarios, showcasing its efficacy and versatility. Among others, these include but are not limited to simulations of a water crown, a dripping faucet and a droplet-toy.
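
For orientation, here is the explicit, pairwise form of an SPH cohesion force of the kind the paper integrates implicitly; the kernel and constants are placeholders, not the paper's model. An implicit scheme would linearize these forces and solve the resulting system with a backward Euler step rather than summing them explicitly.

```python
import numpy as np

def cohesion_forces(x, m, h, kappa):
    """x: (n, 3) positions, m: (n,) masses, h: support radius, kappa: strength."""
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = x[i] - x[j]
            r = float(np.linalg.norm(d))
            if 0.0 < r < h:
                w = (1.0 - r / h) ** 2            # toy cohesion kernel, not the paper's
                f[i] -= kappa * m[j] * w * d / r  # pull i toward its neighbor j
    return f

x = np.random.default_rng(1).uniform(size=(5, 3))
print(cohesion_forces(x, m=np.ones(5), h=0.5, kappa=1.0))
```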
{"title":"Implicit Surface Tension for SPH Fluid Simulation","authors":"Stefan Rhys Jeske, Lukas Westhofen, Fabian Löschner, José Antonio Fernández-Fernández, Jan Bender","doi":"10.1145/3631936","DOIUrl":"https://doi.org/10.1145/3631936","url":null,"abstract":"The numerical simulation of surface tension is an active area of research in many different fields of application and has been attempted using a wide range of methods. Our contribution is the derivation and implementation of an implicit cohesion force based approach for the simulation of surface tension effects using the Smoothed Particle Hydrodynamics (SPH) method. We define a continuous formulation inspired by the properties of surface tension at the molecular scale which is spatially discretized using SPH. An adapted variant of the linearized backward Euler method is used for time discretization, which we also strongly couple with an implicit viscosity model. Finally, we extend our formulation with adhesion forces for interfaces with rigid objects. Existing SPH approaches for surface tension in computer graphics are mostly based on explicit time integration, thereby lacking in stability for challenging settings. We compare our implicit surface tension method to these approaches and further evaluate our model on a wider variety of complex scenarios, showcasing its efficacy and versatility. Among others, these include but are not limited to simulations of a water crown, a dripping faucet and a droplet-toy.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"277 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135474947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Latent L-systems: Transformer-based Tree Generator
CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-02 | DOI: 10.1145/3627101
Jae Joong Lee, Bosheng Li, Bedrich Benes
We show how a Transformer can encode hierarchical tree-like string structures by introducing a new deep learning-based framework for generating 3D biological tree models represented as Lindenmayer system (L-system) strings. L-systems are string-rewriting procedural systems that encode tree topology and geometry. L-systems are efficient, but creating the production rules is one of the most critical problems precluding their usage in practice. We substitute the procedural rules creation with a deep neural model. Instead of writing the rules, we train a deep neural model that produces the output strings. We train our model on 155k tree geometries that are encoded as L-strings, de-parameterized, and converted to a hierarchy of linear sequences corresponding to branches. An end-to-end deep learning model with an attention mechanism then learns the distributions of geometric operations and branches from the input, effectively replacing the L-system rewriting rule generation. The trained deep model generates new L-strings representing 3D tree models in the same way L-systems do by providing the starting string. Our model allows for the generation of a wide variety of new trees, and the deep model agrees with the input by 93.7% in branching angles, 97.2% in branch lengths, and 92.3% in an extracted list of geometric features. We also validate the generated trees using perceptual metrics showing 97% agreement with input geometric models.
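
Since the generator emits L-system strings, a minimal deterministic rewriter shows what the model must produce; the rules below are the textbook binary-tree example, not rules learned by the trained network.

```python
def expand(axiom: str, rules: dict, n: int) -> str:
    """Apply the production rules n times; '[' and ']' push/pop a branch."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s

rules = {"0": "1[0]0", "1": "11"}  # textbook binary-tree L-system
print(expand("0", rules, 3))       # 1111[11[1[0]0]1[0]0]11[1[0]0]1[0]0
```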
{"title":"Latent L-systems: Transformer-based Tree Generator","authors":"Jae Joong Lee, Bosheng Li, Bedrich Benes","doi":"10.1145/3627101","DOIUrl":"https://doi.org/10.1145/3627101","url":null,"abstract":"We show how a Transformer can encode hierarchical tree-like string structures by introducing a new deep learning-based framework for generating 3D biological tree models represented as Lindenmayer system (L-system) strings. L-systems are string-rewriting procedural systems that encode tree topology and geometry. L-systems are efficient, but creating the production rules is one of the most critical problems precluding their usage in practice. We substitute the procedural rules creation with a deep neural model. Instead of writing the rules, we train a deep neural model that produces the output strings. We train our model on 155k tree geometries that are encoded as L-strings, de-parameterized, and converted to a hierarchy of linear sequences corresponding to branches. An end-to-end deep learning model with an attention mechanism then learns the distributions of geometric operations and branches from the input, effectively replacing the L-system rewriting rule generation. The trained deep model generates new L-strings representing 3D tree models in the same way L-systems do by providing the starting string. Our model allows for the generation of a wide variety of new trees, and the deep model agrees with the input by 93.7% in branching angles, 97.2% in branch lengths, and 92.3% in an extracted list of geometric features. We also validate the generated trees using perceptual metrics showing 97% agreement with input geometric models.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"60 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135875314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Layout-Aware Single-Image Document Flattening
CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-02 | DOI: 10.1145/3627818
Pu Li, Weize Quan, Jianwei Guo, Dong-Ming Yan
Single image rectification of document deformation is a challenging task. Although some recent deep learning-based methods have attempted to solve this problem, they cannot achieve satisfactory results when dealing with document images with complex deformations. In this article, we propose a new efficient framework for document flattening. Our main insight is that most layout primitives in a document have rectangular outline shapes, making unwarping local layout primitives essentially homogeneous with unwarping the entire document. The former task is clearly more straightforward to solve than the latter due to the more consistent texture and relatively smooth deformation. On this basis, we propose a layout-aware deep model working in a divide-and-conquer manner. First, we employ a transformer-based segmentation module to obtain the layout information of the input document. Then a new regression module is applied to predict the global and local UV maps. Finally, we design an effective merging algorithm to correct the global prediction with local details. Both quantitative and qualitative experimental results demonstrate that our framework achieves favorable performance against state-of-the-art methods. In addition, the current publicly available document flattening datasets have limited 3D paper shapes without layout annotation and also lack a general geometric correction metric. Therefore, we build a new large-scale synthetic dataset by utilizing a fully automatic rendering method to generate deformed documents with diverse shapes and exact layout segmentation labels. We also propose a new geometric correction metric based on our paired document UV maps. Code and dataset will be released at https://github.com/BunnySoCrazy/LA-DocFlatten.
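
The final unwarping step implied by the predicted UV maps amounts to a single backward-mapping lookup. The sketch below uses OpenCV's remap with an identity UV map standing in for the network's prediction; it shows the mechanism only, not the paper's pipeline.

```python
import cv2
import numpy as np

warped = np.zeros((480, 640, 3), np.uint8)  # placeholder warped document photo
h, w = warped.shape[:2]

# Identity UV map in [0, 1]^2; a trained model would predict this per pixel.
u, v = np.meshgrid(np.linspace(0.0, 1.0, w), np.linspace(0.0, 1.0, h))

map_x = (u * (w - 1)).astype(np.float32)  # where each output pixel samples from
map_y = (v * (h - 1)).astype(np.float32)
flat = cv2.remap(warped, map_x, map_y, interpolation=cv2.INTER_LINEAR)
print(flat.shape)  # (480, 640, 3)
```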
{"title":"Layout-Aware Single-Image Document Flattening","authors":"Pu Li, Weize Quan, Jianwei Guo, Dong-Ming Yan","doi":"10.1145/3627818","DOIUrl":"https://doi.org/10.1145/3627818","url":null,"abstract":"Single image rectification of document deformation is a challenging task. Although some recent deep learning-based methods have attempted to solve this problem, they cannot achieve satisfactory results when dealing with document images with complex deformations. In this article, we propose a new efficient framework for document flattening. Our main insight is that most layout primitives in a document have rectangular outline shapes, making unwarping local layout primitives essentially homogeneous with unwarping the entire document. The former task is clearly more straightforward to solve than the latter due to the more consistent texture and relatively smooth deformation. On this basis, we propose a layout-aware deep model working in a divide-and-conquer manner. First, we employ a transformer-based segmentation module to obtain the layout information of the input document. Then a new regression module is applied to predict the global and local UV maps. Finally, we design an effective merging algorithm to correct the global prediction with local details. Both quantitative and qualitative experimental results demonstrate that our framework achieves favorable performance against state-of-the-art methods. In addition, the current publicly available document flattening datasets have limited 3D paper shapes without layout annotation and also lack a general geometric correction metric. Therefore, we build a new large-scale synthetic dataset by utilizing a fully automatic rendering method to generate deformed documents with diverse shapes and exact layout segmentation labels. We also propose a new geometric correction metric based on our paired document UV maps. Code and dataset will be released at https://github.com/BunnySoCrazy/LA-DocFlatten .","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"58 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135875325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Disentangling Structure and Appearance in ViT Feature Space
CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-01 | DOI: 10.1145/3630096
Narek Tumanyan, Omer Bar-Tal, Shir Amir, Shai Bagon, Tali Dekel
We present a method for semantically transferring the visual appearance of one natural image to another. Specifically, our goal is to generate an image in which objects in a source structure image are “painted” with the visual appearance of their semantically related objects in a target appearance image. To integrate semantic information into our framework, our key idea is to leverage a pre-trained and fixed Vision Transformer (ViT) model. Specifically, we derive novel disentangled representations of structure and appearance extracted from deep ViT features. We then establish an objective function that splices the desired structure and appearance representations, interweaving them together in the space of ViT features. Based on our objective function, we propose two frameworks of semantic appearance transfer – “Splice”, which works by training a generator on a single and arbitrary pair of structure-appearance images, and “SpliceNet”, a feed-forward real-time appearance transfer model trained on a dataset of images from a specific domain. Our frameworks do not involve adversarial training, nor do they require any additional input information such as semantic segmentation or correspondences. We demonstrate high-resolution results on a variety of in-the-wild image pairs, under significant variations in the number of objects, pose, and appearance. Code and supplementary material are available on our project page: splice-vit.github.io.
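
To give a flavor of the disentangled descriptors, the sketch below computes a structure descriptor as the self-similarity of patch tokens and an appearance descriptor as a global token statistic. The random tokens tensor stands in for deep ViT features (feature extraction by hooks is omitted), and the placeholder loss only marks where a splicing objective would sit; none of this is the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

tokens = torch.randn(1, 196, 384)        # (batch, patches, dim), stands in for ViT features

feat = F.normalize(tokens, dim=-1)
structure = feat @ feat.transpose(1, 2)  # (1, 196, 196) patch self-similarity
appearance = tokens.mean(dim=1)          # (1, 384) global token statistic

# A splicing objective would match `structure` against the structure image and
# `appearance` against the appearance image, both evaluated on the generated
# output; the line below is only a placeholder showing where the loss sits.
loss = structure.var() + appearance.norm()
print(structure.shape, appearance.shape, float(loss))
```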
{"title":"Disentangling Structure and Appearance in ViT Feature Space","authors":"Narek Tumanyan, Omer Bar-Tal, Shir Amir, Shai Bagon, Tali Dekel","doi":"10.1145/3630096","DOIUrl":"https://doi.org/10.1145/3630096","url":null,"abstract":"We present a method for semantically transferring the visual appearance of one natural image to another. Specifically, our goal is to generate an image in which objects in a source structure image are “painted” with the visual appearance of their semantically related objects in a target appearance image. To integrate semantic information into our framework, our key idea is to leverage a pre-trained and fixed Vision Transformer (ViT) model. Specifically, we derive novel disentangled representations of structure and appearance extracted from deep ViT features. We then establish an objective function that splices the desired structure and appearance representations, interweaving them together in the space of ViT features. Based on our objective function, we propose two frameworks of semantic appearance transfer – “Splice”, which works by training a generator on a single and arbitrary pair of structure-appearance images, and “SpliceNet”, a feed-forward real-time appearance transfer model trained on a dataset of images from a specific domain . Our frameworks do not involve adversarial training, nor do they require any additional input information such as semantic segmentation or correspondences. We demonstrate high-resolution results on a variety of in-the-wild image pairs, under significant variations in the number of objects, pose, and appearance. Code and supplementary material are available in our project page: splice-vit.github.io.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"126 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135372875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data
CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-31 | DOI: 10.1145/3625264
Jose Luis Ponton, Haoran Yun, Andreas Aristidou, Carlos Andujar, Nuria Pelechano
Accurate and reliable human motion reconstruction is crucial for creating natural interactions of full-body avatars in Virtual Reality (VR) and entertainment applications. As the Metaverse and social applications gain popularity, users are seeking cost-effective solutions to create full-body animations that are comparable in quality to those produced by commercial motion capture systems. In order to provide affordable solutions though, it is important to minimize the number of sensors attached to the subject’s body. Unfortunately, reconstructing the full-body pose from sparse data is a heavily under-determined problem. Some studies that use IMU sensors face challenges in reconstructing the pose due to positional drift and ambiguity of the poses. In recent years, some mainstream VR systems have released 6-degree-of-freedom (6-DoF) tracking devices providing positional and rotational information. Nevertheless, most solutions for reconstructing full-body poses rely on traditional inverse kinematics (IK) solutions, which often produce non-continuous and unnatural poses. In this article, we introduce SparsePoser, a novel deep learning-based solution for reconstructing a full-body pose from a reduced set of six tracking devices. Our system incorporates a convolutional-based autoencoder that synthesizes high-quality continuous human poses by learning the human motion manifold from motion capture data. Then, we employ a learned IK component, made of multiple lightweight feed-forward neural networks, to adjust the hands and feet toward the corresponding trackers. We extensively evaluate our method on publicly available motion capture datasets and with real-time live demos. We show that our method outperforms state-of-the-art techniques using IMU sensors or 6-DoF tracking devices, and can be used for users with different body dimensions and proportions.
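
The learned IK component can be caricatured as a small feed-forward network from tracker readings to per-joint corrections; every size below (six trackers, 9 values per tracker, 6D rotations, 22 joints) is an assumption for illustration, not SparsePoser's architecture.

```python
import torch
import torch.nn as nn

class ToyIKNet(nn.Module):
    """Lightweight feed-forward stand-in for one of the learned IK networks."""
    def __init__(self, n_trackers=6, n_joints=22):
        super().__init__()
        self.n_joints = n_joints
        self.net = nn.Sequential(
            nn.Linear(n_trackers * 9, 256), nn.ReLU(),  # 3D pos + 6D rot per tracker
            nn.Linear(256, n_joints * 6),               # 6D rotation offset per joint
        )

    def forward(self, trackers):                        # (batch, n_trackers * 9)
        return self.net(trackers).view(-1, self.n_joints, 6)

ik = ToyIKNet()
print(ik(torch.randn(4, 54)).shape)                     # torch.Size([4, 22, 6])
```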
{"title":"SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data","authors":"Jose Luis Ponton, Haoran Yun, Andreas Aristidou, Carlos Andujar, Nuria Pelechano","doi":"10.1145/3625264","DOIUrl":"https://doi.org/10.1145/3625264","url":null,"abstract":"Accurate and reliable human motion reconstruction is crucial for creating natural interactions of full-body avatars in Virtual Reality (VR) and entertainment applications. As the Metaverse and social applications gain popularity, users are seeking cost-effective solutions to create full-body animations that are comparable in quality to those produced by commercial motion capture systems. In order to provide affordable solutions though, it is important to minimize the number of sensors attached to the subject’s body. Unfortunately, reconstructing the full-body pose from sparse data is a heavily under-determined problem. Some studies that use IMU sensors face challenges in reconstructing the pose due to positional drift and ambiguity of the poses. In recent years, some mainstream VR systems have released 6-degree-of-freedom (6-DoF) tracking devices providing positional and rotational information. Nevertheless, most solutions for reconstructing full-body poses rely on traditional inverse kinematics (IK) solutions, which often produce non-continuous and unnatural poses. In this article, we introduce SparsePoser, a novel deep learning-based solution for reconstructing a full-body pose from a reduced set of six tracking devices. Our system incorporates a convolutional-based autoencoder that synthesizes high-quality continuous human poses by learning the human motion manifold from motion capture data. Then, we employ a learned IK component, made of multiple lightweight feed-forward neural networks, to adjust the hands and feet toward the corresponding trackers. We extensively evaluate our method on publicly available motion capture datasets and with real-time live demos. We show that our method outperforms state-of-the-art techniques using IMU sensors or 6-DoF tracking devices, and can be used for users with different body dimensions and proportions.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"198 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135765706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0