
arXiv - CS - Graphics: Latest Publications

Subsurface Scattering for 3D Gaussian Splatting
Pub Date : 2024-08-22 DOI: arxiv-2408.12282
Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P. A. Lensch
3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at real-time speeds. While 3D Gaussians efficiently approximate an object's surface, they fail to capture the volumetric properties of subsurface scattering. We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data. Our method decomposes the scene into an explicit surface represented as 3D Gaussians, with a spatially varying BRDF, and an implicit volumetric representation of the scattering component. A learned incident light field accounts for shadowing. We optimize all parameters jointly via ray-traced differentiable rendering. Our approach enables material editing, relighting and novel view synthesis at interactive rates. We show successful application on synthetic data and introduce a newly acquired multi-view multi-light dataset of objects in a light-stage setup. Compared to previous work we achieve comparable or better results at a fraction of optimization and rendering time while enabling detailed control over material attributes. Project page: https://sss.jdihlmann.com/
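As an illustration only, the sketch below shows the general shape of such a joint optimization in PyTorch: an explicit per-Gaussian surface term and a small MLP standing in for the implicit scattering component, fitted against OLAT observations. `render_olat` and all tensor sizes are invented placeholders, not the authors' ray-traced differentiable renderer.

```python
# Illustrative sketch only -- not the paper's implementation. `render_olat`
# is a hypothetical stand-in for the ray-traced differentiable renderer;
# the real method renders full images from 3D Gaussians under OLAT lighting.
import torch

n_gaussians = 1024
surface = {  # explicit surface component: positions plus a simple BRDF albedo
    "positions": torch.randn(n_gaussians, 3, requires_grad=True),
    "albedo": torch.rand(n_gaussians, 3, requires_grad=True),
}
scatter_mlp = torch.nn.Sequential(  # implicit volumetric scattering component
    torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
)

def render_olat(surface, scatter_mlp, light_dir):
    """Placeholder differentiable renderer: surface shading plus scattered term."""
    shading = surface["albedo"] * light_dir.clamp(min=0.0).sum()
    scatter = scatter_mlp(torch.cat(
        [surface["positions"], light_dir.expand(n_gaussians, 3)], dim=-1))
    return (shading + scatter).mean(dim=0)  # collapse to one "pixel" for brevity

params = list(surface.values()) + list(scatter_mlp.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
olat_batch = [(torch.tensor([0.0, 0.0, 1.0]), torch.tensor([0.5, 0.4, 0.3]))]
for light_dir, target in olat_batch:  # one (light direction, observation) pair
    loss = torch.nn.functional.mse_loss(render_olat(surface, scatter_mlp, light_dir), target)
    opt.zero_grad(); loss.backward(); opt.step()
```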
Citations: 0
Pano2Room: Novel View Synthesis from a Single Indoor Panorama
Pub Date : 2024-08-21 DOI: arxiv-2408.11413
Guo Pu, Yiming Zhao, Zhouhui Lian
Recent single-view 3D generative methods have made significant advancements by leveraging knowledge distilled from extensive 3D object datasets. However, challenges persist in the synthesis of 3D scenes from a single view, primarily due to the complexity of real-world environments and the limited availability of high-quality prior resources. In this paper, we introduce a novel approach called Pano2Room, designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image. These panoramic images can be easily generated using a panoramic RGBD inpainter from captures at a single location with any camera. The key idea is to initially construct a preliminary mesh from the input panorama, and iteratively refine this mesh using a panoramic RGBD inpainter while collecting photo-realistic 3D-consistent pseudo novel views. Finally, the refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views. This pipeline enables the reconstruction of real-world 3D scenes, even in the presence of large occlusions, and facilitates the synthesis of photo-realistic novel views with detailed geometry. Extensive qualitative and quantitative experiments have been conducted to validate the superiority of our method in single-panorama indoor novel synthesis compared to the state-of-the-art. Our code and data are available at https://github.com/TrickyGo/Pano2Room.
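A rough structural sketch of that refine-and-collect loop follows; `build_initial_mesh`, `render_from`, and `inpaint_rgbd` are hypothetical stand-ins, and the released repository above is the authoritative reference.

```python
# Structural sketch only: the refine-and-collect loop described in the abstract.
# All helpers are dummy stand-ins; they do not perform real meshing or inpainting.
import numpy as np

def build_initial_mesh(panorama_rgbd):      # stand-in: preliminary mesh from the panorama
    return {"vertices": np.zeros((0, 3)), "colors": np.zeros((0, 3))}

def render_from(mesh, pose):                # stand-in: render an RGBD view of the mesh
    return np.zeros((64, 64, 4))

def inpaint_rgbd(rendered_rgbd):            # stand-in: panoramic RGBD inpainter
    return rendered_rgbd

panorama = np.zeros((256, 512, 4))          # RGB-D panorama captured at a single location
mesh = build_initial_mesh(panorama)
pseudo_views = []
for pose in range(8):                       # sample novel camera poses around the room
    view = inpaint_rgbd(render_from(mesh, pose))
    pseudo_views.append((pose, view))       # keep photo-realistic, 3D-consistent pseudo views
    # the mesh would be refined here with the inpainted geometry and appearance
# pseudo_views would then supervise training of a 3D Gaussian Splatting field
```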
Citations: 0
Bimodal Visualization of Industrial X-Ray and Neutron Computed Tomography Data
Pub Date : 2024-08-21 DOI: arxiv-2408.11957
Xuan Huang, Haichao Miao, Hyojin Kim, Andrew Townsend, Kyle Champley, Joseph Tringe, Valerio Pascucci, Peer-Timo Bremer
Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our collaborating domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of bimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive bimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a bimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large bimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples.
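A minimal sketch of that starting point as I read it: building the bivariate histogram over co-registered X-ray and neutron values with NumPy. The random volumes stand in for real scans, and the topological segmentation and brushing widget are not shown.

```python
# Minimal sketch: bivariate histogram of co-registered X-ray and neutron values.
# Random volumes stand in for real scans; segmentation/brushing are not shown.
import numpy as np

xray = np.random.rand(64, 64, 64).ravel()      # placeholder X-ray attenuation volume
neutron = np.random.rand(64, 64, 64).ravel()   # placeholder co-registered neutron volume

hist, x_edges, n_edges = np.histogram2d(xray, neutron, bins=128)
# Each 2D bin counts voxels with a given (X-ray, neutron) value pair; peaks in
# `hist` suggest candidate material classes that a bimodal transfer function
# can map to colors/opacities, and that the user can then brush to correct.
print(hist.shape, int(hist.sum()) == xray.size)
```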
Citations: 0
Iterative Object Count Optimization for Text-to-image Diffusion Models
Pub Date : 2024-08-21 DOI: arxiv-2408.11721
Oz Zafar, Lior Wolf, Idan Schwartz
We address a persistent challenge in text-to-image models: accurately generating a specified number of objects. Current models, which learn from image-text pairs, inherently struggle with counting, as training data cannot depict every possible number of objects for any given object. To solve this, we propose optimizing the generated image based on a counting loss derived from a counting model that aggregates an object's potential. Employing an out-of-the-box counting model is challenging for two reasons: first, the model requires a scaling hyperparameter for the potential aggregation that varies depending on the viewpoint of the objects, and second, classifier guidance techniques require modified models that operate on noisy intermediate diffusion steps. To address these challenges, we propose an iterated online training mode that improves the accuracy of inferred images while altering the text conditioning embedding and dynamically adjusting hyperparameters. Our method offers three key advantages: (i) it can consider non-derivable counting techniques based on detection models, (ii) it is a zero-shot plug-and-play solution facilitating rapid changes to the counting techniques and image generation methods, and (iii) the optimized counting token can be reused to generate accurate images without additional optimization. We evaluate the generation of various objects and show significant improvements in accuracy. The project page is available at https://ozzafar.github.io/count_token.
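A toy reading of the counting-token idea (not the paper's code): optimize a token embedding against a differentiable counting loss until the predicted count matches a target. The linear probe below is an invented stand-in for the counting model that aggregates an object's potential.

```python
# Toy illustration: optimize a "counting token" against a counting loss.
# The linear probe is an invented stand-in for a detection-based counting model.
import torch

target_count = torch.tensor(5.0)
count_token = torch.zeros(1, 768, requires_grad=True)  # hypothetical text-conditioning slot
count_probe = torch.nn.Linear(768, 1)                   # stand-in aggregator of object potential

opt = torch.optim.Adam([count_token], lr=1e-2)
for step in range(100):                                 # iterated online optimization
    predicted = count_probe(count_token).squeeze()
    loss = (predicted - target_count) ** 2              # counting loss
    opt.zero_grad(); loss.backward(); opt.step()
# Per the abstract, the optimized token can then be reused to generate accurate
# images without re-running the optimization.
```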
Citations: 0
DEGAS: Detailed Expressions on Full-Body Gaussian Avatars
Pub Date : 2024-08-20 DOI: arxiv-2408.10588
Zhijing Shao, Duotun Wang, Qing-Yao Tian, Yao-Dong Yang, Hengyu Meng, Zeyu Cai, Bo Dong, Yu Zhang, Kang Zhang, Zeyu Wang
Although neural rendering has made significant advancements in creating lifelike, animatable full-body and head avatars, incorporating detailed expressions into full-body avatars remains largely unexplored. We present DEGAS, the first 3D Gaussian Splatting (3DGS)-based modeling method for full-body avatars with rich facial expressions. Trained on multiview videos of a given subject, our method learns a conditional variational autoencoder that takes both the body motion and facial expression as driving signals to generate Gaussian maps in the UV layout. To drive the facial expressions, instead of the commonly used 3D Morphable Models (3DMMs) in 3D head avatars, we propose to adopt the expression latent space trained solely on 2D portrait images, bridging the gap between 2D talking faces and 3D avatars. Leveraging the rendering capability of 3DGS and the rich expressiveness of the expression latent space, the learned avatars can be reenacted to reproduce photorealistic rendering images with subtle and accurate facial expressions. Experiments on an existing dataset and our newly proposed dataset of full-body talking avatars demonstrate the efficacy of our method. We also propose an audio-driven extension of our method with the help of 2D talking faces, opening new possibilities to interactive AI agents.
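To make the conditioning concrete, here is a heavily simplified sketch of a decoder that maps a body-pose vector plus a 2D-portrait expression latent to a UV-space Gaussian map. All dimensions and the plain MLP architecture are assumptions for illustration, not the DEGAS design.

```python
# Simplified sketch (dimensions and architecture are assumptions, not DEGAS):
# decode (body pose, expression latent) into a UV-space map of Gaussian parameters.
import torch

pose_dim, expr_dim, uv_res, gauss_channels = 63, 128, 64, 11  # illustrative sizes
decoder = torch.nn.Sequential(
    torch.nn.Linear(pose_dim + expr_dim, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, uv_res * uv_res * gauss_channels),
)

pose = torch.randn(1, pose_dim)         # driving body motion
expression = torch.randn(1, expr_dim)   # latent from a 2D talking-face model
gaussian_map = decoder(torch.cat([pose, expression], dim=-1))
gaussian_map = gaussian_map.view(1, gauss_channels, uv_res, uv_res)
# The channels would hold per-texel Gaussian attributes (e.g. offset, rotation,
# scale, opacity, color) that are splatted to render the avatar.
print(gaussian_map.shape)
```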
Citations: 0
MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model
Pub Date : 2024-08-19 DOI: arxiv-2408.10198
Minghua Liu, Chong Zeng, Xinyue Wei, Ruoxi Shi, Linghao Chen, Chao Xu, Mengqi Zhang, Zhaoning Wang, Xiaoshuai Zhang, Isabella Liu, Hongzhi Wu, Hao Su
Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. The input normal maps can be predicted by 2D diffusion models, significantly aiding in the guidance and refinement of the geometry's learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. Project page: https://meshformer3d.github.io
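As a hedged sketch of what combining SDF supervision with rendered-normal supervision could look like in practice; the tensors are random placeholders and the loss weight is made up, so this is not the MeshFormer training code.

```python
# Hedged sketch: combined SDF + rendered-normal supervision. Tensors are random
# placeholders and the 0.5 weight is invented; this is not the MeshFormer code.
import torch

pred_sdf  = torch.randn(10_000, requires_grad=True)           # predicted signed distances
gt_sdf    = torch.randn(10_000)                                # ground-truth SDF samples
pred_nrm  = torch.randn(4, 3, 128, 128, requires_grad=True)    # rendered normal maps
guide_nrm = torch.randn(4, 3, 128, 128)                        # normals from a 2D diffusion prior

loss = torch.nn.functional.l1_loss(pred_sdf, gt_sdf) \
     + 0.5 * torch.nn.functional.mse_loss(pred_nrm, guide_nrm)
loss.backward()  # gradients would flow back into the sparse-voxel transformer
```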
Citations: 0
Neural Representation of Shape-Dependent Laplacian Eigenfunctions
Pub Date : 2024-08-19 DOI: arxiv-2408.10099
Yue Chang, Otman Benchekroun, Maurizio M. Chiaramonte, Peter Yichen Chen, Eitan Grinspun
The eigenfunctions of the Laplace operator are essential in mathematical physics, engineering, and geometry processing. Typically, these are computed by discretizing the domain and performing eigendecomposition, tying the results to a specific mesh. However, this method is unsuitable for continuously-parameterized shapes. We propose a novel representation for eigenfunctions in continuously-parameterized shape spaces, where eigenfunctions are spatial fields with continuous dependence on shape parameters, defined by minimal Dirichlet energy, unit norm, and mutual orthogonality. We implement this with multilayer perceptrons trained as neural fields, mapping shape parameters and domain positions to eigenfunction values. A unique challenge is enforcing mutual orthogonality with respect to causality, where the causal ordering varies across the shape space. Our training method therefore requires three interwoven concepts: (1) learning $n$ eigenfunctions concurrently by minimizing Dirichlet energy with unit norm constraints; (2) filtering gradients during backpropagation to enforce causal orthogonality, preventing earlier eigenfunctions from being influenced by later ones; (3) dynamically sorting the causal ordering based on eigenvalues to track eigenvalue curve crossovers. We demonstrate our method on problems such as shape family analysis, predicting eigenfunctions for incomplete shapes, interactive shape manipulation, and computing higher-dimensional eigenfunctions, on all of which traditional methods fall short.
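One way to picture item (1) is the penalized objective below: minimize the Dirichlet energy u_i^T L u_i of n fields while pushing their Gram matrix toward the identity (unit norm plus mutual orthogonality). The tiny path-graph Laplacian and the penalty weight are stand-ins; the paper instead uses neural fields over a shape space and additionally filters gradients for causal orthogonality, which this sketch omits.

```python
# Sketch of objective (1): Dirichlet energy with unit-norm/orthogonality penalties.
# A tiny fixed graph Laplacian replaces the neural field; causal gradient
# filtering from the paper is intentionally omitted here.
import torch

m, n = 50, 4                                 # vertices, number of eigenfunctions
L = torch.diag(torch.full((m,), 2.0)) \
  - torch.diag(torch.ones(m - 1), 1) - torch.diag(torch.ones(m - 1), -1)
U = torch.randn(m, n, requires_grad=True)    # candidate eigenfunction values

opt = torch.optim.Adam([U], lr=1e-2)
for step in range(200):
    dirichlet = torch.trace(U.t() @ L @ U)                 # sum_i u_i^T L u_i
    gram = U.t() @ U
    constraints = ((gram - torch.eye(n)) ** 2).sum()       # unit norm + orthogonality
    loss = dirichlet + 10.0 * constraints                  # penalty weight is arbitrary
    opt.zero_grad(); loss.backward(); opt.step()
```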
Citations: 0
Double-Precision Floating-Point Data Visualizations Using Vulkan API
Pub Date : 2024-08-19 DOI: arxiv-2408.09699
Nezihe Sozen
Proper representation of data in graphical visualizations becomes challenging when high accuracy in data types is required, especially in those situations where the difference between double-precision floating-point and single-precision floating-point values makes a significant difference. Some of the limitations of using single-precision over double-precision include lesser accuracy, which accumulates errors over time, and poor modeling of large or small numbers. In such scenarios, emulated double precision is often used as a solution. The proposed methodology uses a modern GPU pipeline and graphics library API specifications to use native double precision. In this research, the approach is implemented using the Vulkan API, C++, and GLSL. Experimental evaluation with a series of experiments on 2D and 3D point datasets is proposed to indicate the effectiveness of the approach. This evaluates performance comparisons between native double-precision implementations against their emulated double-precision approaches with respect to rendering performance and accuracy. This study provides insight into the benefits of using native double-precision in graphical applications, denoting limitations and problems with emulated double-precision usages. These results improve the general understanding of the precision involved in graphical visualizations and assist developers in making decisions about which precision methods to use during their applications.
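As a side illustration of why emulation is needed when only single precision is available (plain Python/NumPy, not the paper's Vulkan/GLSL implementation): compensated summation in float32 recovers most of the accuracy that naive float32 accumulation loses, at the cost of extra arithmetic per element.

```python
# Side illustration: error accumulation in float32 vs. compensated (Kahan)
# summation vs. a float64 reference. Not the paper's Vulkan/GLSL code.
import numpy as np

values = np.full(100_000, np.float32(0.1), dtype=np.float32)

naive = np.float32(0.0)
s, c = np.float32(0.0), np.float32(0.0)
for v in values:
    naive = naive + v          # plain float32: rounding error accumulates
    y = v - c                  # Kahan compensated summation, still in float32
    t = s + y
    c = (t - s) - y
    s = t

reference = values.astype(np.float64).sum()
print(float(naive), float(s), float(reference))
# The compensated sum stays much closer to the float64 reference, which is the
# trade-off emulated double precision makes when native doubles are unavailable.
```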
Citations: 0
Unified Smooth Vector Graphics: Modeling Gradient Meshes and Curve-based Approaches Jointly as Poisson Problem
Pub Date : 2024-08-17 DOI: arxiv-2408.09211
Xingze Tian, Tobias Günther
Research on smooth vector graphics is separated into two independent research threads: one on interpolation-based gradient meshes and the other on diffusion-based curve formulations. With this paper, we propose a mathematical formulation that unifies gradient meshes and curve-based approaches as solution to a Poisson problem. To combine these two well-known representations, we first generate a non-overlapping intermediate patch representation that specifies for each patch a target Laplacian and boundary conditions. Unifying the treatment of boundary conditions adds further artistic degrees of freedom to the existing formulations, such as Neumann conditions on diffusion curves. To synthesize a raster image for a given output resolution, we then rasterize boundary conditions and Laplacians for the respective patches and compute the final image as solution to a Poisson problem. We evaluate the method on various test scenes containing gradient meshes and curve-based primitives. Since our mathematical formulation works with established smooth vector graphics primitives on the front-end, it is compatible with existing content creation pipelines and with established editing tools. Rather than continuing two separate research paths, we hope that a unification of the formulations will lead to new rasterization and vectorization tools in the future that utilize the strengths of both approaches.
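A minimal sketch of that final step, under the assumption of a single channel, a 5-point stencil, and Dirichlet boundary conditions only (the paper also supports Neumann conditions): rasterize a target Laplacian and boundary values, then solve the resulting linear system.

```python
# Minimal sketch: recover an image from a rasterized target Laplacian and
# Dirichlet boundary values by solving a Poisson problem (5-point stencil).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 32                                    # output resolution (one channel)
target_lap = np.zeros((n, n))             # rasterized per-pixel target Laplacian
boundary = np.zeros((n, n)); boundary[0, :] = 1.0   # e.g. top edge pinned to white

A = sp.lil_matrix((n * n, n * n)); b = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        k = i * n + j
        if i in (0, n - 1) or j in (0, n - 1):        # boundary pixel: Dirichlet value
            A[k, k] = 1.0; b[k] = boundary[i, j]
        else:                                         # interior pixel: discrete Laplacian
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, (i + di) * n + (j + dj)] = 1.0
            b[k] = target_lap[i, j]

image = spsolve(A.tocsr(), b).reshape(n, n)           # smooth solution over the patch
```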
Citations: 0
Localized Evaluation for Constructing Discrete Vector Fields
Pub Date : 2024-08-08 DOI: arxiv-2408.04769
Tanner Finken, Julien Tierny, Joshua A Levine
Topological abstractions offer a method to summarize the behavior of vector fields but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman's discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every simplex in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.
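A toy reading of the outward-flow idea on a single triangle (hypothetical helper, not the paper's pairing algorithm): for each edge, compare the sampled vector field against the edge direction to decide which endpoint the flow leaves.

```python
# Toy reading of "outward flow": orient each edge of a tiny triangulation by the
# sign of the vector field against the edge direction. Not the paper's algorithm.
import numpy as np

vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]

def field(p):                         # piecewise-constant stand-in vector field
    return np.array([1.0, 0.25])

outward = {}
for u, v in edges:
    direction = vertices[v] - vertices[u]
    midpoint = 0.5 * (vertices[u] + vertices[v])
    # flow leaves u toward v if the field at the edge agrees with (v - u)
    outward[(u, v)] = "u->v" if np.dot(field(midpoint), direction) > 0 else "v->u"
print(outward)
```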
Citations: 0