
ACM Transactions on Graphics: Latest Publications

Evaluating Visual Perception of Object Motion in Dynamic Environments
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687912
Budmonde Duinkharjav, Jenna Kang, Gavin Stuart Peter Miller, Chang Xiao, Qi Sun
Precisely understanding how objects move in 3D is essential for broad scenarios such as video editing, gaming, driving, and athletics. With screen-displayed computer graphics content, users only perceive limited cues to judge the object motion from the on-screen optical flow. Conventionally, visual perception is studied with stationary settings and singular objects. However, in practical applications, we---the observer---also move within complex scenes. Therefore, we must extract object motion from a combined optical flow displayed on screen, which can often lead to mis-estimations due to perceptual ambiguities. We measure and model observers' perceptual accuracy of object motions in dynamic 3D environments, a universal but under-investigated scenario in computer graphics applications. We design and employ a crowdsourcing-based psychophysical study, quantifying the relationships among patterns of scene dynamics and content, and the resulting perceptual judgments of object motion direction. The acquired psychophysical data underpins a model for generalized conditions. We then demonstrate the model's guidance ability to significantly enhance users' understanding of task object motion in gaming and animation design. With applications in measuring and compensating for object motion errors in video and rendering, we hope the research establishes a new frontier for understanding and mitigating perceptual errors caused by the gap between screen-displayed graphics and the physical world.
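As an aside on why the combined flow is ambiguous: the sketch below (illustrative only, not from the paper; the pinhole model, velocities, and frame rate are made-up) projects a moving point into a moving camera and shows that the on-screen flow of the tracked point encodes only the relative motion of object and observer, so many different object motions are consistent with the same flow.

```python
import numpy as np

def project(p_world, cam_pos, focal=1.0):
    """Pinhole projection of a world point into a camera at cam_pos
    (camera looks down -Z with axis-aligned orientation, for simplicity)."""
    rel = p_world - cam_pos
    return focal * np.array([rel[0], rel[1]]) / -rel[2]

# Hypothetical numbers: object and observer both translate during one frame.
dt = 1.0 / 60.0
p_obj   = np.array([0.5, 0.0, -5.0]);  v_obj = np.array([1.0, 0.0, 0.0])
cam_pos = np.array([0.0, 0.0,  0.0]);  v_cam = np.array([0.6, 0.0, 0.0])

flow = (project(p_obj + v_obj * dt, cam_pos + v_cam * dt)
        - project(p_obj, cam_pos))

# The screen flow of this point depends only on the relative motion v_obj - v_cam,
# so very different (v_obj, v_cam) pairs produce identical flow -- the perceptual
# ambiguity the study quantifies.
print(flow / dt)
```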
Citations: 0
StyleCrafter: Taming Artistic Video Diffusion with Reference-Augmented Adapter Learning
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687975
Gongye Liu, Menghan Xia, Yong Zhang, Haoxin Chen, Jinbo Xing, Yibo Wang, Xintao Wang, Ying Shan, Yujiu Yang
Text-to-video (T2V) models have shown remarkable capabilities in generating diverse videos. However, they struggle to produce user-desired artistic videos due to (i) text's inherent clumsiness in expressing specific styles and (ii) the generally degraded style fidelity. To address these challenges, we introduce StyleCrafter, a generic method that enhances pretrained T2V models with a style control adapter, allowing video generation in any style by feeding a reference image. Considering the scarcity of artistic video data, we propose to first train a style control adapter using style-rich image datasets, then transfer the learned stylization ability to video generation through a tailor-made finetuning paradigm. To promote content-style disentanglement, we employ carefully designed data augmentation strategies to enhance decoupled learning. Additionally, we propose a scale-adaptive fusion module to balance the influences of text-based content features and image-based style features, which helps generalization across various text and style combinations. StyleCrafter efficiently generates high-quality stylized videos that align with the content of the texts and resemble the style of the reference images. Experiments demonstrate that our approach is more flexible and efficient than existing competitors. Project page: https://gongyeliu.github.io/StyleCrafter.github.io/
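The scale-adaptive fusion module is only described at a high level above; below is a minimal, hypothetical PyTorch sketch of one way a learned per-sample scale could balance content and style features. The module name, layer sizes, and pooling scheme are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ScaleAdaptiveFusion(nn.Module):
    """Hypothetical sketch: predict a per-sample scale from pooled content/style
    features and use it to weight the style branch before fusing."""
    def __init__(self, dim):
        super().__init__()
        self.scale_net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, 1))

    def forward(self, content_feat, style_feat):
        # content_feat, style_feat: (batch, tokens, dim) feature maps from the two branches
        pooled = torch.cat([content_feat.mean(dim=1), style_feat.mean(dim=1)], dim=-1)
        scale = torch.sigmoid(self.scale_net(pooled)).unsqueeze(1)   # (batch, 1, 1)
        return content_feat + scale * style_feat                     # fused features

fused = ScaleAdaptiveFusion(dim=320)(torch.randn(2, 77, 320), torch.randn(2, 77, 320))
print(fused.shape)   # torch.Size([2, 77, 320])
```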
Citations: 0
Still-Moving: Customized Video Generation without Customized Video Data
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687945
Hila Chefer, Shiran Zada, Roni Paiss, Ariel Ephrat, Omer Tov, Michael Rubinstein, Lior Wolf, Tali Dekel, Tomer Michaeli, Inbar Mosseri
Customizing text-to-image (T2I) models has seen tremendous progress recently, particularly in areas such as personalization, stylization, and conditional generation. However, expanding this progress to video generation is still in its infancy, primarily due to the lack of customized video data. In this work, we introduce Still-Moving, a novel generic framework for customizing a text-to-video (T2V) model, without requiring any customized video data. The framework applies to the prominent T2V design where the video model is built over a T2I model (e.g., via inflation). We assume access to a customized version of the T2I model, trained only on still image data (e.g., using DreamBooth). Naively plugging in the weights of the customized T2I model into the T2V model often leads to significant artifacts or insufficient adherence to the customization data. To overcome this issue, we train lightweight Spatial Adapters that adjust the features produced by the injected T2I layers. Importantly, our adapters are trained on "frozen videos" (i.e., repeated images), constructed from image samples generated by the customized T2I model. This training is facilitated by a novel Motion Adapter module, which allows us to train on such static videos while preserving the motion prior of the video model. At test time, we remove the Motion Adapter modules and leave in only the trained Spatial Adapters. This restores the motion prior of the T2V model while adhering to the spatial prior of the customized T2I model. We demonstrate the effectiveness of our approach on diverse tasks including personalized, stylized, and conditional generation. In all evaluated scenarios, our method seamlessly integrates the spatial prior of the customized T2I model with a motion prior supplied by the T2V model.
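The "frozen videos" used to train the Spatial Adapters are simply still images repeated along the time axis; a minimal sketch (assumed tensor layout, not the authors' code):

```python
import torch

def make_frozen_video(image, num_frames):
    """Repeat a still image along a new time axis: (C, H, W) -> (T, C, H, W).
    Such clips contain zero motion, so any motion in the adapted model's output
    must come from the video model's own motion prior."""
    return image.unsqueeze(0).expand(num_frames, *image.shape).clone()

clip = make_frozen_video(torch.rand(3, 64, 64), num_frames=16)
print(clip.shape)                          # torch.Size([16, 3, 64, 64])
print(torch.allclose(clip[0], clip[-1]))   # True: every frame is identical
```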
Citations: 0
Fluid Implicit Particles on Coadjoint Orbits
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687970
Mohammad Sina Nabizadeh, Ritoban Roy-Chowdhury, Hang Yin, Ravi Ramamoorthi, Albert Chern
We propose Coadjoint Orbit FLIP (CO-FLIP), a high order accurate, structure preserving fluid simulation method in the hybrid Eulerian-Lagrangian framework. We start with a Hamiltonian formulation of the incompressible Euler Equations, and then, using a local, explicit, and high order divergence free interpolation, construct a modified Hamiltonian system that governs our discrete Euler flow. The resulting discretization, when paired with a geometric time integration scheme, is energy and circulation preserving (formally the flow evolves on a coadjoint orbit) and is similar to the Fluid Implicit Particle (FLIP) method. CO-FLIP enjoys multiple additional properties including that the pressure projection is exact in the weak sense, and the particle-to-grid transfer is an exact inverse of the grid-to-particle interpolation. The method is demonstrated numerically with outstanding stability, energy, and Casimir preservation. We show that the method produces benchmarks and turbulent visual effects even at low grid resolutions.
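CO-FLIP's structure-preserving, high-order machinery is beyond a few lines, but the baseline FLIP transfer it builds on is standard: particles receive only the grid's velocity change rather than the interpolated velocity itself, which preserves fine-scale detail. A 1D illustrative sketch of that baseline (not the paper's method; forces and pressure projection are omitted):

```python
import numpy as np

def p2g(x_p, v_p, n_cells, dx):
    """Particle-to-grid: scatter particle velocities with linear (hat) weights."""
    mom, mass = np.zeros(n_cells), np.zeros(n_cells)
    for xp, vp in zip(x_p, v_p):
        i = int(xp / dx); f = xp / dx - i
        for j, w in ((i, 1.0 - f), (i + 1, f)):
            if 0 <= j < n_cells:
                mom[j] += w * vp; mass[j] += w
    return np.divide(mom, mass, out=np.zeros(n_cells), where=mass > 0)

def g2p(x_p, grid_v, dx):
    """Grid-to-particle: linear interpolation of a grid field at particle positions."""
    out = np.empty_like(x_p)
    for k, xp in enumerate(x_p):
        i = int(xp / dx); f = xp / dx - i
        out[k] = (1.0 - f) * grid_v[i] + f * grid_v[min(i + 1, len(grid_v) - 1)]
    return out

dx, x_p = 0.1, np.array([0.23, 0.31, 0.47])
v_p = np.array([1.0, -0.5, 0.2])
v_old = p2g(x_p, v_p, 8, dx)          # transfer particle velocities to the grid
v_new = v_old.copy()                  # the grid solve (advection, pressure) would go here
v_p_flip = v_p + g2p(x_p, v_new - v_old, dx)   # FLIP: add back only the grid *change*
v_p_pic  = g2p(x_p, v_new, dx)                 # PIC: overwrite with interpolated velocity
print(v_p_flip)   # unchanged here, since the grid did not change
print(v_p_pic)    # already smoothed by the two transfers
```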
Citations: 0
PCO: Precision-Controllable Offset Surfaces with Sharp Features
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687920
Lei Wang, Xudong Wang, Pengfei Wang, Shuangmin Chen, Shiqing Xin, Jiong Guo, Wenping Wang, Changhe Tu
Surface offsetting is a crucial operation in digital geometry processing and computer-aided design, where an offset is defined as an iso-value surface of the distance field. A challenge emerges as even smooth surfaces can exhibit sharp features in their offsets due to the non-differentiable characteristics of the underlying distance field. Prevailing approaches to the offsetting problem involve approximating the distance field and then extracting the iso-surface. However, even with dual contouring (DC), there is a risk of degrading sharp feature points/lines due to the inaccurate discretization of the distance field. This issue is exacerbated when the input is a piecewise-linear triangle mesh. This study is inspired by the observation that a triangle-based distance field, unlike the complex distance field rooted at the entire surface, remains smooth across the entire 3D space except at the triangle itself. With a polygonal surface comprising n triangles, the final distance field for accommodating the offset surface is determined by minimizing these n triangle-based distance fields. In implementation, our approach starts by tetrahedralizing the space around the offset surface, enabling a tetrahedron-wise linear approximation for each triangle-based distance field. The final offset surface within a tetrahedral range can be traced by slicing the tetrahedron with planes. As illustrated in the teaser figure, a key advantage of our algorithm is its ability to precisely preserve sharp features. Furthermore, this paper addresses the problem of simplifying the offset surface's complexity while preserving sharp features, formulating it as a maximal-clique problem.
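The core observation, that the offset field of a triangle soup is the pointwise minimum of n smooth per-triangle distance fields, can be checked with a brute-force evaluator. The sketch below is illustrative only and skips the tetrahedral discretization and sharp-feature handling that constitute the paper's contribution.

```python
import numpy as np

def point_segment_dist(p, a, b):
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_triangle_dist(p, a, b, c):
    """Unsigned distance from p to triangle (a, b, c): smooth everywhere off the triangle."""
    n = np.cross(b - a, c - a)
    if np.linalg.norm(n) < 1e-12:            # degenerate triangle: fall back to its edges
        return min(point_segment_dist(p, a, b), point_segment_dist(p, b, c),
                   point_segment_dist(p, c, a))
    n = n / np.linalg.norm(n)
    q = p - np.dot(p - a, n) * n             # projection of p onto the supporting plane
    v0, v1, v2 = b - a, c - a, q - a         # barycentric coordinates of q
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    if v >= 0 and w >= 0 and (1 - v - w) >= 0:   # projection lies inside the face
        return abs(np.dot(p - a, n))
    return min(point_segment_dist(p, a, b), point_segment_dist(p, b, c),
               point_segment_dist(p, c, a))

def offset_field(p, triangles):
    """Distance field of a triangle soup = pointwise minimum of per-triangle fields;
    the offset surface at radius r is its iso-surface {p : offset_field(p) = r}."""
    return min(point_triangle_dist(p, *tri) for tri in triangles)

tri = (np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
print(offset_field(np.array([0.2, 0.2, 0.5]), [tri]))   # 0.5: directly above the face
```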
Citations: 0
Approximation by Meshes with Spherical Faces
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687942
Anthony Cisneros Ramos, Martin Kilian, Alisher Aikyn, Helmut Pottmann, Christian Müller
Meshes with spherical faces and circular edges are an attractive alternative to polyhedral meshes for applications in architecture and design. Approximation of a given surface by such a mesh needs to consider the visual appearance, approximation quality, the position and orientation of circular intersections of neighboring faces and the existence of a torsion free support structure that is formed by the planes of circular edges. The latter requirement implies that the mesh simultaneously defines a second mesh whose faces lie on the same spheres as the faces of the first mesh. It is a discretization of the two envelopes of a sphere congruence, i.e., a two-parameter family of spheres. We relate such sphere congruences to torsal parameterizations of associated line congruences. Turning practical requirements into properties of such a line congruence, we optimize line and sphere congruence as a basis for computing a mesh with spherical triangular or quadrilateral faces that approximates a given reference surface.
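The "two envelopes of a sphere congruence" can be written down explicitly; a standard formulation (not specific to this paper) of a two-parameter sphere family and its envelope conditions, in LaTeX:

```latex
% A sphere congruence with centers c(u,v) and radii r(u,v) is the zero set of
%   F(x; u, v) = |x - c(u,v)|^2 - r(u,v)^2.
% Its envelopes are the surfaces of points x that additionally satisfy
% \partial_u F = 0 and \partial_v F = 0:
\[
  \lVert x - c(u,v) \rVert^{2} = r(u,v)^{2}, \qquad
  (x - c)\cdot c_u = -\, r\, r_u, \qquad
  (x - c)\cdot c_v = -\, r\, r_v .
\]
% Generically these three conditions leave two solutions x^{\pm}(u,v) per parameter
% value; these are the two envelope sheets that the spherical-face mesh discretizes.
```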
Citations: 0
V^3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687935
Penghao Wang, Zhirui Zhang, Liao Wang, Kaixin Yao, Siyuan Xie, Jingyi Yu, Minye Wu, Lan Xu
Experiencing high-fidelity volumetric video as seamlessly as 2D videos is a long-held dream. However, current dynamic 3DGS methods, despite their high rendering quality, face challenges in streaming on mobile devices due to computational and bandwidth constraints. In this paper, we introduce V^3 (Viewing Volumetric Videos), a novel approach that enables high-quality mobile rendering through the streaming of dynamic Gaussians. Our key innovation is to view dynamic 3DGS as 2D videos, facilitating the use of hardware video codecs. Additionally, we propose a two-stage training strategy to reduce storage requirements with rapid training speed. The first stage employs hash encoding and shallow MLP to learn motion, then reduces the number of Gaussians through pruning to meet the streaming requirements, while the second stage fine tunes other Gaussian attributes using residual entropy loss and temporal loss to improve temporal continuity. This strategy, which disentangles motion and appearance, maintains high rendering quality with compact storage requirements. Meanwhile, we designed a multi-platform player to decode and render 2D Gaussian videos. Extensive experiments demonstrate the effectiveness of V^3, outperforming other methods by enabling high-quality rendering and streaming on common devices, which is unseen before. As the first to stream dynamic Gaussians on mobile devices, our companion player offers users an unprecedented volumetric video experience, including smooth scrolling and instant sharing. Our project page with source code is available at https://authoritywang.github.io/v3/.
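The key idea of treating per-frame Gaussian attributes as 2D images, so that hardware video codecs can compress them, can be illustrated with a toy packing step; the layout and 8-bit quantization below are assumptions for illustration, not the paper's actual format.

```python
import numpy as np

def pack_gaussians_to_image(attrs, width):
    """Pack per-Gaussian attributes (N, C) into an (H, W, C) 8-bit image so that a
    sequence of frames becomes an ordinary video a hardware codec can compress.
    The row-major layout and quantization range are illustrative choices."""
    n, c = attrs.shape
    height = -(-n // width)                          # ceil division
    lo, hi = attrs.min(axis=0), attrs.max(axis=0)    # per-channel range (kept as side info)
    q = np.round(255 * (attrs - lo) / np.maximum(hi - lo, 1e-8)).astype(np.uint8)
    img = np.zeros((height * width, c), dtype=np.uint8)
    img[:n] = q
    return img.reshape(height, width, c), (lo, hi)

# e.g. 10,000 Gaussians with 3 attribute channels (say, a position block) per frame
frame_img, meta = pack_gaussians_to_image(np.random.randn(10_000, 3), width=128)
print(frame_img.shape)   # (79, 128, 3) -> one video frame per time step
```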
Citations: 0
3D Reconstruction with Fast Dipole Sums
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687914
Hanyu Chen, Bailey Miller, Ioannis Gkioulekas
We introduce a method for high-quality 3D reconstruction from multi-view images. Our method uses a new point-based representation, the regularized dipole sum, which generalizes the winding number to allow for interpolation of per-point attributes in point clouds with noisy or outlier points. Using regularized dipole sums, we represent implicit geometry and radiance fields as per-point attributes of a dense point cloud, which we initialize from structure from motion. We additionally derive Barnes-Hut fast summation schemes for accelerated forward and adjoint dipole sum queries. These queries facilitate the use of ray tracing to efficiently and differentiably render images with our point-based representations, and thus update their point attributes to optimize scene geometry and appearance. We evaluate our method in inverse rendering applications against state-of-the-art alternatives, based on ray tracing of neural representations or rasterization of Gaussian point-based representations. Our method significantly improves 3D reconstruction quality and robustness at equal runtimes, while also supporting more general rendering methods such as shadow rays for direct illumination.
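The regularized dipole sum generalizes the point-cloud winding number; the classical, brute-force quantity being generalized is the sum below. The softening term eps and the O(NM) evaluation are illustrative stand-ins; the paper's contributions (per-point attribute interpolation and Barnes-Hut acceleration) are not sketched here.

```python
import numpy as np

def winding_number(queries, points, normals, areas, eps=1e-9):
    """Brute-force point-cloud winding number
        w(q) = sum_i a_i * (p_i - q) . n_i / (4 pi |p_i - q|^3),
    which is ~1 inside a closed oriented surface sampled by the points and ~0 outside.
    eps is a small softening term, standing in very loosely for regularization."""
    d = points[None, :, :] - queries[:, None, :]            # (M, N, 3) offsets p_i - q
    r3 = np.linalg.norm(d, axis=-1) ** 3 + eps
    return np.sum(areas * np.einsum('mnk,nk->mn', d, normals) / r3, axis=-1) / (4 * np.pi)

# Toy check: a unit sphere sampled with outward normals and equal area weights.
rng = np.random.default_rng(0)
p = rng.normal(size=(4000, 3)); p /= np.linalg.norm(p, axis=1, keepdims=True)
w = winding_number(np.array([[0., 0., 0.], [0., 0., 3.]]),
                   p, p, np.full(4000, 4 * np.pi / 4000))
print(w)   # approximately [1, 0]: inside vs. outside the sphere
```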
Citations: 0
EgoHDM: A Real-time Egocentric-Inertial Human Motion Capture, Localization, and Dense Mapping System
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687907
Handi Yin, Bonan Liu, Manuel Kaufmann, Jinhao He, Sammy Christen, Jie Song, Pan Hui
We present EgoHDM, an online egocentric-inertial human motion capture (mocap), localization, and dense mapping system. Our system uses 6 inertial measurement units (IMUs) and a commodity head-mounted RGB camera. EgoHDM is the first human mocap system that offers dense scene mapping in near real-time. Further, it is fast and robust to initialize and fully closes the loop between physically plausible map-aware global human motion estimation and mocap-aware 3D scene reconstruction. To achieve this, we design a tightly coupled mocap-aware dense bundle adjustment and physics-based body pose correction module leveraging a local body-centric elevation map. The latter introduces a novel terrain-aware contact PD controller, which enables characters to physically contact the given local elevation map thereby reducing human floating or penetration. We demonstrate the performance of our system on established synthetic and real-world benchmarks. The results show that our method reduces human localization, camera pose, and mapping accuracy error by 41%, 71%, 46%, respectively, compared to the state of the art. Our qualitative evaluations on newly captured data further demonstrate that EgoHDM can cover challenging scenarios in non-flat terrain including stepping over stairs and outdoor scenes in the wild. Our project page: https://handiyin.github.io/EgoHDM/
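The terrain-aware contact PD controller is described only at a high level; a generic spring-damper correction toward a local elevation map (gains, margin, and the contact test are made-up placeholders, not the paper's controller) might look like:

```python
import numpy as np

def terrain_contact_force(foot_pos, foot_vel, elevation, kp=2000.0, kd=50.0, margin=0.02):
    """Generic PD contact correction (not the paper's controller): if a foot point is at
    or below the local terrain height plus a small margin, push it back up with a
    spring-damper force; otherwise apply nothing."""
    ground_h = elevation(foot_pos[0], foot_pos[1])        # local elevation map lookup
    penetration = ground_h + margin - foot_pos[2]
    if penetration <= 0.0:                                # no contact
        return np.zeros(3)
    fz = kp * penetration - kd * foot_vel[2]              # PD term along the up axis
    return np.array([0.0, 0.0, max(fz, 0.0)])             # never pull the foot downward

# Hypothetical sloped terrain and a foot slightly below it.
slope = lambda x, y: 0.1 * x
print(terrain_contact_force(np.array([1.0, 0.0, 0.05]), np.array([0.0, 0.0, -0.3]), slope))
```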
Citations: 0
Direct Manipulation of Procedural Implicit Surfaces
IF 6.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1145/3687936
Marzia Riso, Élie Michel, Axel Paris, Valentin Deschaintre, Mathieu Gaillard, Fabio Pellacini
Procedural implicit surfaces are a popular representation for shape modeling. They provide a simple framework for complex geometric operations such as Booleans, blending and deformations. However, their editability remains a challenging task: as the definition of the shape is purely implicit, direct manipulation of the shape cannot be performed. Thus, parameters of the model are often exposed through abstract sliders, which have to be nontrivially created by the user and understood by others for each individual model to modify. Further, each of these sliders needs to be set one by one to achieve the desired appearance. To circumvent this laborious process while preserving editability, we propose to directly manipulate the implicit surface in the viewport. We let the user naturally interact with the output shape, leveraging points on a co-parameterization we design specifically for implicit surfaces, to guide the parameter updates and reach the desired appearance faster. We leverage our automatic differentiation of the procedural implicit surface to propagate interactions made by the user in the viewport to the shape parameters themselves. We further design a solver that uses such information to guide an intuitive and smooth user workflow. We demonstrate different editing processes across multiple implicit shapes and parameters that would be tedious by tuning sliders.
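The "simple framework for complex geometric operations" mentioned above is the standard signed-distance toolbox; the sketch below builds a tiny procedural implicit and finite-differences it with respect to a shape parameter, the kind of derivative that lets a viewport drag update parameters. All specifics (shapes, blend, parameter) are illustrative, not the paper's system.

```python
import numpy as np

def sphere(p, center, radius):
    return np.linalg.norm(p - center) - radius

def box(p, half_extents):
    q = np.abs(p) - half_extents
    return np.linalg.norm(np.maximum(q, 0.0)) + min(max(q[0], q[1], q[2]), 0.0)

def smooth_union(d1, d2, k=0.2):
    """Polynomial smooth-min blend of two signed distances (standard formulation)."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 + (d1 - d2) * h - k * h * (1.0 - h)

def scene(p, radius):
    """A tiny procedural implicit: a box smoothly blended with a parameterized sphere."""
    return smooth_union(box(p, np.array([0.4, 0.4, 0.4])),
                        sphere(p, np.array([0.5, 0.0, 0.0]), radius))

# Sensitivity of the field at a point near the blended surface w.r.t. the sphere radius,
# via finite differences: this is the kind of derivative that turns a drag on the surface
# into an update of the underlying parameter.
p_pick, r, h = np.array([0.9, 0.1, 0.0]), 0.45, 1e-4
df_dr = (scene(p_pick, r + h) - scene(p_pick, r - h)) / (2 * h)
print(scene(p_pick, r), df_dr)    # field value and its sensitivity to the radius
```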
Citations: 0