
Computer Graphics Forum: Latest Publications

Region-Aware Sparse Attention Network for Lane Detection
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70246
Yan Deng, Guoqiang Xiao

Lane detection is a fundamental task in intelligent driving systems. However, the slender and sparse structure of lanes, combined with the dominance of irrelevant background regions in road scenes, makes accurate lane localization particularly challenging, especially under complex and adverse conditions. To address these issues, we propose a novel Region-Aware Sparse Attention Network (RSANet), which is designed to selectively enhance lane-relevant features while suppressing background interference. Specifically, we introduce the Region-guided Pooling Predictor (RPP) that generates lane region activation maps to guide the backbone network in focusing on informative areas. To improve the multi-scale feature fusion capability of the Feature Pyramid Network (FPN), we propose the Bilateral Pooling Attention Module (BPAM) that captures discriminative features by jointly modeling dependencies along both the channel and spatial dimensions. Furthermore, the Lane-guided Sparse Attention Mechanism (LSAM) efficiently aggregates global contextual information from the most relevant spatial regions to reinforce lane prior representations while significantly reducing redundant computation. Extensive experiments on benchmark datasets demonstrate that RSANet outperforms state-of-the-art methods in a variety of challenging scenarios. Notably, RSANet achieves an F1@50 score of 80.04% on the CULane dataset, a clear improvement over prior methods.
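For context, the F1@50 metric used on CULane counts a predicted lane as a true positive when its IoU with a ground-truth lane reaches at least 0.5, then combines precision and recall. Below is a minimal sketch of that scoring step, assuming the per-lane IoU matrix has already been computed; the greedy matching and the `f1_at_50` helper are illustrative, not the official CULane evaluation code.

```python
import numpy as np

def f1_at_50(iou, thr=0.5):
    """iou: (num_pred, num_gt) matrix of lane IoUs.
    Greedy one-to-one matching: a prediction is a true positive when it
    claims an unused ground-truth lane with IoU >= thr."""
    iou = np.asarray(iou, dtype=float)
    num_pred, num_gt = iou.shape
    used_gt, tp = set(), 0
    for p in np.argsort(-iou.max(axis=1)):        # strongest predictions first
        g = int(np.argmax(iou[p]))
        if iou[p, g] >= thr and g not in used_gt:
            used_gt.add(g)
            tp += 1
    precision = tp / max(num_pred, 1)
    recall = tp / max(num_gt, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

# toy check: two of three predicted lanes overlap a ground-truth lane well
print(f1_at_50([[0.8, 0.1], [0.2, 0.6], [0.3, 0.2]]))  # approx (0.667, 1.0, 0.8)
```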

Citations: 0
BoxFusion: Reconstruction-Free Open-Vocabulary 3D Object Detection via Real-Time Multi-View Box Fusion
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70254
Yuqing Lan, Chenyang Zhu, Zhirui Gao, Jiazhao Zhang, Yihan Cao, Renjiao Yi, Yijie Wang, Kai Xu

Open-vocabulary 3D object detection has gained significant interest due to its critical applications in autonomous driving and embodied AI. Existing detection methods, whether offline or online, typically rely on dense point cloud reconstruction, which imposes substantial computational overhead and memory constraints, hindering real-time deployment in downstream tasks. To address this, we propose a novel reconstruction-free online framework tailored for memory-efficient and real-time 3D detection. Specifically, given streaming posed RGB-D video input, we leverage Cubify Anything as a pre-trained visual foundation model (VFM) for single-view 3D object detection, coupled with CLIP to capture open-vocabulary semantics of detected objects. To fuse all detected bounding boxes across different views into a unified one, we employ an association module for multi-view correspondences and an optimization module to fuse the 3D bounding boxes of the same instance. The association module utilizes 3D Non-Maximum Suppression (NMS) and a box correspondence matching module. The optimization module uses an efficient IoU-guided random optimization technique based on particle filtering to enforce multi-view consistency of the 3D bounding boxes while minimizing computational complexity. Extensive experiments on CA-1M and ScanNetV2 datasets demonstrate that our method achieves state-of-the-art performance among online methods. Benefiting from this novel reconstruction-free paradigm for 3D object detection, our method exhibits great generalization abilities in various scenarios, enabling real-time perception even in environments exceeding 1000 square meters.
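The association module's 3D Non-Maximum Suppression step can be pictured with axis-aligned boxes. The sketch below is a generic 3D IoU plus greedy NMS, assuming boxes given as (min corner, max corner); the paper's box parameterization, thresholds, and matching logic may well differ.

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-9)

def nms_3d(boxes, scores, iou_thr=0.25):
    """Keep each box only if it does not overlap an already-kept, higher-scoring box."""
    keep = []
    for i in np.argsort(-np.asarray(scores)):
        if all(iou_3d(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(int(i))
    return keep

boxes = np.array([[0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
                  [0.1, 0.0, 0.0, 1.1, 1.0, 1.0],   # near-duplicate of the first box
                  [2.0, 2.0, 2.0, 3.0, 3.0, 3.0]])
print(nms_3d(boxes, scores=[0.9, 0.8, 0.7]))         # -> [0, 2]
```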

Citations: 0
View-Independent Wire Art Modeling via Manifold Fitting
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70247
HuiGuang Huang, Dong-Yi Wu, Yulin Wang, Yu Cao, Tong-Yee Lee

This paper presents a novel fully automated method for generating view-independent abstract wire art from 3D models. The main challenge in creating line art is to strike a balance among abstraction, structural clarity, 3D perception, and consistent aesthetics from different viewpoints. Many approaches have been proposed, including extracting wire art from meshes and reconstructing it from pictures, but they all suffer from wires that are unorganized and cumbersome, and they typically only guarantee a convincing appearance from specific viewpoints. To overcome these problems, we propose a paradigm shift: instead of predicting the line segments directly, we consider the generation of wire art as an optimization-driven manifold-fitting problem. Thus we can abstract and generalize the 3D model while retaining the key properties necessary for appealing line art, including structural topology and connectivity, and maintain the three-dimensionality of the line art from multiple perspectives. Experimental results show that our view-independent method outperforms previous methods in terms of line simplicity, shape fidelity, and visual consistency.
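To make the manifold-fitting view concrete, the toy sketch below fits a closed polyline, i.e. a 1D manifold, to a 2D point set by alternating a pull toward the nearest data points with Laplacian smoothing. The `fit_closed_polyline` routine and all parameters are invented for illustration and are not the paper's optimization.

```python
import numpy as np

def fit_closed_polyline(points, n_vertices=40, iters=200, step=0.3, smooth=0.25):
    """Fit a closed polyline (a 1D manifold) to a 2D point cloud:
    each iteration pulls every vertex toward its nearest data point,
    then applies Laplacian smoothing to keep the curve coherent."""
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).mean()
    t = np.linspace(0.0, 2.0 * np.pi, n_vertices, endpoint=False)
    verts = center + radius * np.stack([np.cos(t), np.sin(t)], axis=1)  # start from a circle
    for _ in range(iters):
        # data term: move each vertex toward its closest input point
        d = np.linalg.norm(verts[:, None, :] - points[None, :, :], axis=2)
        verts += step * (points[d.argmin(axis=1)] - verts)
        # smoothness term: discrete Laplacian on the closed loop
        lap = 0.5 * (np.roll(verts, 1, axis=0) + np.roll(verts, -1, axis=0)) - verts
        verts += smooth * lap
    return verts

# toy data: noisy samples of a circle of radius 2
rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2.0 * np.pi, 300)
pts = np.stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1) + 0.05 * rng.normal(size=(300, 2))
curve = fit_closed_polyline(pts)
print(np.linalg.norm(curve, axis=1).mean())  # close to 2, the radius of the fitted loop
```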

Citations: 0
Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70252
Yixin Yang, Yang Zhou, Hui Huang

Recently, 2D Gaussian Splatting (2DGS) has demonstrated superior geometry reconstruction quality compared to the popular 3DGS by using 2D surfels to approximate thin surfaces. However, it falls short when dealing with glossy surfaces, resulting in visible holes in these areas. We find that the reflection discontinuity causes the issue. To fit the jump from diffuse to specular reflection at different viewing angles, a depth bias is introduced into the optimized Gaussian primitives. To address that, we first replace the depth distortion loss in 2DGS with a novel depth convergence loss, which imposes a strong constraint on depth continuity. Then, we rectify the depth criterion in determining the actual surface, which fully accounts for all the intersecting Gaussians along the ray. Qualitative and quantitative evaluations across various datasets reveal that our method significantly improves reconstruction quality, with more complete and accurate surfaces than 2DGS. Code is available at https://github.com/XiaoXinyyx/Unbiased_Surfel.
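As background, splatting renderers composite per-primitive depths along each ray with front-to-back alpha blending, and the surface depth is then read off from that accumulation. The sketch below shows two generic readouts, the alpha-blended expected depth and a median-style depth where accumulated opacity first passes 0.5; this is standard volume compositing, not the paper's specific criterion.

```python
import numpy as np

def ray_depths(depths, alphas):
    """depths, alphas: per-ray samples sorted front to back.
    Returns (alpha-blended expected depth,
             'median' depth where accumulated opacity first exceeds 0.5)."""
    depths = np.asarray(depths, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # transmittance before each sample
    weights = trans * alphas                                        # compositing weights
    expected = (weights * depths).sum() / max(weights.sum(), 1e-9)
    acc = np.cumsum(weights)
    idx = int(np.searchsorted(acc, 0.5))
    median = depths[min(idx, len(depths) - 1)]
    return expected, median

# toy ray: a semi-transparent splat at depth 1.0 in front of an opaque one at depth 2.0
print(ray_depths([1.0, 2.0], [0.4, 1.0]))  # -> (1.6, 2.0)
```

On this toy ray the blended depth (1.6) floats in front of the opaque surface at depth 2.0, which is the kind of depth bias the abstract refers to.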

Citations: 0
GNF: Gaussian Neural Fields for Multidimensional Signal Representation and Reconstruction
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70232
Abelaziz Bouzidi, Hamid Laga, Hazem Wannous, Ferdous Sohel

Neural fields have emerged as a powerful framework for representing continuous multidimensional signals such as images and videos, 3D and 4D objects and scenes, and radiance fields. While efficient, achieving high-quality representation requires the use of wide and deep neural networks. These, however, are slow to train and evaluate. Although several acceleration techniques have been proposed, they either trade memory for faster training and/or inference, rely on thousands of fitted primitives with considerable optimization time, or compromise the smooth, continuous nature of neural fields. In this paper, we introduce Gaussian Neural Fields (GNF), a novel compact neural decoder that maps learned feature grids into continuous non-linear signals, such as RGB images, Signed Distance Functions (SDFs), and radiance fields, using a single compact layer of Gaussian kernels defined in a high-dimensional feature space. Our key observation is that neurons in traditional MLPs perform simple computations, usually a dot product followed by an activation function, necessitating wide and deep MLPs or high-resolution feature grids to model complex functions. In this paper, we show that replacing MLP-based decoders with Gaussian kernels whose centers are learned features yields highly accurate representations of 2D (RGB), 3D (geometry), and 5D (radiance fields) signals with just a single layer of such kernels. This representation is highly parallelizable, operates on low-resolution grids, and trains in under 15 seconds for 3D geometry and under 11 minutes for view synthesis. GNF matches the accuracy of deep MLP-based decoders with far fewer parameters and significantly higher inference throughput. The source code is publicly available at https://grbfnet.github.io/.
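The decoder described here amounts to a single layer of Gaussian radial basis functions applied to interpolated grid features. A minimal sketch of that idea follows, with made-up dimensions and randomly initialized centers, bandwidths, and weights standing in for learned parameters; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed sizes: feature vectors of dim 16, 64 Gaussian kernels, RGB output
feat_dim, n_kernels, out_dim = 16, 64, 3
centers = rng.normal(size=(n_kernels, feat_dim))       # learned kernel centers (features)
log_sigma = np.zeros(n_kernels)                        # learned per-kernel bandwidths
weights = rng.normal(size=(n_kernels, out_dim)) * 0.1  # learned output weights

def gaussian_decoder(features):
    """Map per-sample feature vectors (N, feat_dim) to signals (N, out_dim)
    with one layer of Gaussian kernels: sum_k w_k * exp(-|f - mu_k|^2 / (2 sigma_k^2))."""
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    act = np.exp(-0.5 * d2 / np.exp(2 * log_sigma)[None, :])          # Gaussian activations
    return act @ weights

feats = rng.normal(size=(5, feat_dim))  # e.g. features interpolated from a low-res grid
print(gaussian_decoder(feats).shape)    # (5, 3)
```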

Citations: 0
Preconditioned Deformation Grids
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70269
Julian Kaltheuner, Alexander Oebel, Hannah Droege, Patrick Stotko, Reinhard Klein

Dynamic surface reconstruction of objects from point cloud sequences is a challenging field in computer graphics. Existing approaches either require multiple regularization terms or extensive training data, which leads to compromised reconstruction accuracy, over-smoothing, or poor generalization to unseen objects and motions. To address these limitations, we introduce Preconditioned Deformation Grids, a novel technique for estimating coherent deformation fields directly from unstructured point cloud sequences without requiring or forming explicit correspondences. Key to our approach is the use of multi-resolution voxel grids that capture the overall motion at varying spatial scales, enabling a more flexible deformation representation. In conjunction with incorporating grid-based Sobolev preconditioning into gradient-based optimization, we show that applying a Chamfer loss between the input point clouds as well as to an evolving template mesh is sufficient to obtain accurate deformations. To ensure temporal consistency along the object surface, we include a weak isometry loss on mesh edges which complements the main objective without constraining deformation fidelity. Extensive evaluations demonstrate that our method achieves superior results, particularly for long sequences, compared to state-of-the-art techniques.
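The Chamfer loss mentioned above measures how well two point sets cover each other through nearest-neighbour distances. A minimal brute-force sketch, assuming point sets small enough for a dense pairwise distance matrix (a real implementation would use a spatial acceleration structure):

```python
import numpy as np

def chamfer_loss(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean squared distance from each point to its nearest neighbour in the other set."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)   # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
template = rng.uniform(size=(500, 3))                       # e.g. points sampled on the template mesh
scan = template + 0.01 * rng.normal(size=template.shape)    # e.g. a noisy input point cloud
print(chamfer_loss(template, scan))                         # small value -> surfaces nearly aligned
```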

Citations: 0
A Solver-Aided Hierarchical Language for LLM-Driven CAD Design
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70250
B. T. Jones, Z. Zhang, F. Hähnlein, W. Matusik, M. Ahmad, V. Kim, A. Schulz

Parametric CAD systems use domain-specific languages (DSLs) to represent geometry as programs, enabling both flexible modeling and structured editing. With the rise of large language models (LLMs), there is growing interest in generating such programs from natural language. This raises a key question: what kind of DSL best supports both CAD generation and editing, whether performed by a human or an AI? In this work, we introduce AIDL, a hierarchical, solver-aided DSL designed to align with the strengths of LLMs while remaining interpretable and editable by humans. AIDL enables high-level reasoning by breaking problems into abstract components and structural relationships, while offloading low-level geometric reasoning to a constraint solver. We evaluate AIDL in a 2D text-to-CAD setting using a zero-shot prompt-based interface and compare it to OpenSCAD, a widely used CAD DSL that appears in LLM training data. AIDL produces results that are visually competitive and significantly easier to edit. Our findings suggest that language design is a powerful complement to model training and prompt engineering for building collaborative AI–human tools in CAD. Code is available at https://github.com/deGravity/aidl.
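To illustrate what offloading low-level geometric reasoning to a constraint solver can look like in 2D, the sketch below recovers point positions from distance constraints by gradient descent on the squared residuals. The constraint format, the `solve_constraints` helper, and the numeric scheme are all invented for illustration and bear no relation to AIDL's actual syntax or solver.

```python
import numpy as np

def solve_constraints(points, constraints, iters=1000, lr=0.1, eps=1e-6):
    """points: dict name -> np.array([x, y]); constraints: list of
    ('dist', a, b, target) tuples. Gradient descent on the summed
    squared residuals, using a simple numerical gradient."""
    names = list(points)
    x = np.concatenate([points[n] for n in names]).astype(float)

    def energy(x):
        p = {n: x[2 * i:2 * i + 2] for i, n in enumerate(names)}
        r = [np.linalg.norm(p[a] - p[b]) - d for _, a, b, d in constraints]
        return 0.5 * sum(v * v for v in r)

    for _ in range(iters):
        grad = np.zeros_like(x)
        e0 = energy(x)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += eps
            grad[i] = (energy(xp) - e0) / eps
        x -= lr * grad
    return {n: x[2 * i:2 * i + 2] for i, n in enumerate(names)}

# hypothetical spec: a right triangle with two unit legs and a sqrt(2) hypotenuse
pts = {"a": np.array([0.0, 0.0]), "b": np.array([0.8, 0.1]), "c": np.array([0.2, 0.9])}
cons = [("dist", "a", "b", 1.0), ("dist", "a", "c", 1.0), ("dist", "b", "c", 2 ** 0.5)]
solved = solve_constraints(pts, cons)
print({k: np.round(v, 2) for k, v in solved.items()})  # recovered side lengths should match the spec
```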

Citations: 0
EmoDiffGes: Emotion-Aware Co-Speech Holistic Gesture Generation with Progressive Synergistic Diffusion
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70261
Xinru Li, Jingzhong Lin, Bohao Zhang, Yuanyuan Qi, Changbo Wang, Gaoqi He

Co-speech gesture generation, driven by emotional expression and synergistic bodily movements, is essential for applications such as virtual avatars and human-robot interaction. Existing co-speech gesture generation methods face two fundamental limitations: (1) producing inexpressive gestures due to ignoring the temporal evolution of emotion; (2) generating incoherent and unnatural motions as a result of either holistic body oversimplification or independent part modeling. To address the above limitations, we propose EmoDiffGes, a diffusion-based framework grounded in embodied emotion theory, unifying dynamic emotion conditioning and part-aware synergistic modeling. Specifically, a Dynamic Emotion-Alignment Module (DEAM) is first applied to extract dynamic emotional cues and inject emotion guidance into the generation process. Then, a Progressive Synergistic Gesture Generator (PSGG) iteratively refines region-specific latent codes while maintaining full-body coordination, leveraging a Body Region Prior for part-specific encoding and Progressive Inter-Region Synergistic Flow for global motion coherence. Extensive experiments validate the effectiveness of our methods, showcasing the potential for generating expressive, coordinated, and emotionally grounded human gestures.

Citations: 0
ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70240
Munkyung Jung, Dohae Lee, In-Kwon Lee

We introduce ClothingTwin, a novel end-to-end framework for reconstructing 3D digital twins of clothing that capture both the outer and inner fabric, without the need for manual mannequin removal. Traditional 2D “ghost mannequin” photography techniques remove the mannequin and composite partial inner textures to create images in which the garment appears as if it were worn by a transparent model. However, extending such a method to photorealistic 3D Gaussian Splatting (3DGS) is far more challenging. Achieving consistent inner-layer compositing across the large sets of images used for 3DGS optimization quickly becomes impractical if done manually. To address these issues, ClothingTwin introduces three key innovations. First, a specialized image acquisition protocol captures two sets of images for each garment: one worn normally on the mannequin (outer layer exposed) and one worn inside-out (inner layer exposed). This eliminates the need to painstakingly edit out mannequins in thousands of images and provides full coverage of all fabric surfaces. Second, we employ a mesh-guided 3DGS reconstruction for each layer and leverage Non-Rigid Iterative Closest Point (ICP) to align outer and inner point clouds despite distinct geometries. Third, our enhanced rendering pipeline, featuring mesh-guided back-face culling, back-to-front alpha blending, and recalculated spherical harmonic angles, ensures photorealistic visualization of the combined outer and inner layers without inter-layer artifacts. Experimental evaluations on various garments show that ClothingTwin outperforms conventional 3DGS-based methods, and our ablation study validates the effectiveness of each proposed component.
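The back-to-front alpha blending used in the rendering pipeline is the classic "over" operator applied from the farthest fragment toward the camera. Below is a single-pixel sketch with made-up colors and opacities; the mesh-guided culling and spherical-harmonic recalculation from the paper are not modeled here.

```python
import numpy as np

def blend_back_to_front(colors, alphas):
    """colors: (N, 3) RGB per fragment, alphas: (N,) opacities,
    both ordered back to front. Repeatedly applies C = a * c + (1 - a) * C."""
    out = np.zeros(3)
    for c, a in zip(np.asarray(colors, dtype=float), np.asarray(alphas, dtype=float)):
        out = a * c + (1.0 - a) * out
    return out

# inner layer (far) is opaque red, outer layer (near) is a semi-transparent blue
inner = [1.0, 0.0, 0.0]
outer = [0.0, 0.0, 1.0]
print(blend_back_to_front([inner, outer], [1.0, 0.4]))  # -> [0.6, 0.0, 0.4]
```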

Citations: 0
FlowCapX: Physics-Grounded Flow Capture with Long-Term Consistency
IF 2.9 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-10-11 | DOI: 10.1111/cgf.70274
N. Tao, L. Zhang, X. Ni, M. Chu, B. Chen

We present FlowCapX, a physics-enhanced framework for flow reconstruction from sparse video inputs, addressing the challenge of jointly optimizing complex physical constraints and sparse observational data over long time horizons. Existing methods often struggle to capture turbulent motion while maintaining physical consistency, limiting reconstruction quality and downstream tasks. Focusing on velocity inference, our approach introduces a hybrid framework that strategically separates representation and supervision across spatial scales. At the coarse level, we resolve sparse-view ambiguities via a novel optimization strategy that aligns long-term observation with physics-grounded velocity fields. By emphasizing vorticity-based physical constraints, our method enhances physical fidelity and improves optimization stability. At the fine level, we prioritize observational fidelity to preserve critical turbulent structures. Extensive experiments demonstrate state-of-the-art velocity reconstruction, enabling velocity-aware downstream tasks, e.g., accurate flow analysis, scene augmentation with tracer visualization and re-simulation. Our implementation is released at ://github.com/taoningxiao/FlowCapX.git.
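The vorticity-based constraints refer to the curl of the velocity field, ω = ∇ × u. Below is a minimal sketch of evaluating vorticity on a regular grid with finite differences via `np.gradient`; the grid resolution and test field are made up for illustration.

```python
import numpy as np

def vorticity(u, v, w, dx=1.0):
    """Curl of a velocity field sampled on a regular 3D grid.
    u, v, w: (nx, ny, nz) velocity components; returns the three curl components."""
    du = np.gradient(u, dx)  # [du/dx, du/dy, du/dz]
    dv = np.gradient(v, dx)
    dw = np.gradient(w, dx)
    wx = dw[1] - dv[2]       # dw/dy - dv/dz
    wy = du[2] - dw[0]       # du/dz - dw/dx
    wz = dv[0] - du[1]       # dv/dx - du/dy
    return wx, wy, wz

# toy field: rigid rotation about the z-axis, u = (-y, x, 0), whose curl is (0, 0, 2)
n = 16
x, y, z = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
wx, wy, wz = vorticity(-y.astype(float), x.astype(float), np.zeros((n, n, n)))
print(wz.mean())  # -> 2.0
```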

Citations: 0