
Computers & Graphics-UK: Latest Publications

ArchComplete: Autoregressive 3D architectural design generation with hierarchical diffusion-based upsampling
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-13 · DOI: 10.1016/j.cag.2025.104477
Shervin Rasoulzadeh, Mathias Bank Stigsen, Iva Kovacic, Kristina Schinegger, Stefan Rutzinger, Michael Wimmer
Recent advances in 3D generative models have shown promising results but often fall short in capturing the complexity of architectural geometries and topologies. To tackle this, we present ArchComplete, a two-stage voxel-based 3D generative pipeline consisting of a vector-quantized model, whose composition is modeled with an autoregressive transformer for generating coarse shapes, followed by a set of multiscale diffusion models for augmenting with fine geometric details. Key to our pipeline is (i) learning a contextually rich codebook of local patch embeddings, optimized alongside a 2.5D perceptual loss that captures global spatial correspondence of projections onto three axis-aligned orthogonal planes, and (ii) redefining upsampling as a set of multiscale conditional diffusion models learning over a hierarchy of coarse-to-fine local volumetric patches, with a guided denoising process using 3D Gaussian windows that smooths noise estimates across overlapping patches during inference. Trained on our introduced dataset of 3D house models, ArchComplete autoregressively generates models at a resolution of 64³ and progressively refines them up to 512³, with voxel sizes as small as ≈9 cm. ArchComplete solves a variety of tasks, including genetic interpolation and variation, unconditional synthesis, shape and plan-drawing completion, as well as geometric detailization, while achieving state-of-the-art performance.
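The guided denoising with 3D Gaussian windows can be pictured as a weighted blend of per-patch noise estimates, so that predictions vary smoothly across patch seams. The sketch below only illustrates that blending idea; the helper names (`gaussian_window_3d`, `blend_patch_noise`), the window width, and the toy sizes are assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_window_3d(size, sigma_frac=0.25):
    # Separable 3D Gaussian weighting window of shape (size, size, size).
    coords = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(coords ** 2) / (2.0 * (sigma_frac * size) ** 2))
    return g[:, None, None] * g[None, :, None] * g[None, None, :]

def blend_patch_noise(volume_shape, patch_estimates, origins, patch_size):
    # Fuse overlapping per-patch noise estimates into one volume by
    # Gaussian-weighted averaging, smoothing estimates across overlaps.
    acc = np.zeros(volume_shape)
    weight = np.zeros(volume_shape)
    win = gaussian_window_3d(patch_size)
    for eps, (z, y, x) in zip(patch_estimates, origins):
        sl = (slice(z, z + patch_size), slice(y, y + patch_size), slice(x, x + patch_size))
        acc[sl] += eps * win
        weight[sl] += win
    return acc / np.maximum(weight, 1e-8)

# Toy usage: two overlapping 8^3 patches inside a 12^3 volume.
patches = [np.random.randn(8, 8, 8), np.random.randn(8, 8, 8)]
fused = blend_patch_noise((12, 12, 12), patches, [(0, 0, 0), (4, 4, 4)], 8)
```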
Citations: 0
Visualization and interaction techniques for single-text digital reading: A survey
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-13 · DOI: 10.1016/j.cag.2025.104481
Lei Han, Jiandan Song, Shuai Chen, Zhaoman Zhong
With the development of information technology, digital reading has become an important way to acquire knowledge. As text resources continue to grow, readers have an increasing need to efficiently understand key information from a single text. To address this challenge, visualization technologies are becoming useful tools for reading assistance. They help present text clearly, highlight important content, and improve reading efficiency. This paper reviews and summarizes recent representative studies on visualization and interaction techniques in single-text reading, and classifies existing methods along two core dimensions. First, by data type: (1) structural information, such as chapters and arguments; (2) content elements, such as data and charts; (3) user interaction data, including highlighting and annotation. Second, by technical approach: (1) Text Presentation Enhancement; (2) Information Content Enhancement; (3) Layout Optimization; (4) Interaction Enhancement. These techniques improve text display in different ways and support better understanding and memory. Based on this classification, the paper reviews the current development of relevant technologies, explores their application potential in academic, educational, and journalistic settings, and summarizes key functions and design concepts of typical reading assistance systems to provide references for future research and system design.
Citations: 0
MESA-Net: Multi-Scale Enhanced Spatial Attention Network for medical image segmentation
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-13 · DOI: 10.1016/j.cag.2025.104488
Demin Liu, Zhou Yang, Hua Wang, Huiyu Li, Fan Zhang
Medical image segmentation plays a critical role in enabling precise visualization and interaction within Extended Reality (XR) environments, which are increasingly used in surgical planning, image-guided interventions, and medical training. Transformer-based architectures have recently become a prominent approach for medical image segmentation due to their ability to capture long-range dependencies through self-attention mechanisms. However, these models often struggle to effectively extract local contextual information that is essential for accurate boundary delineation and fine-grained structure preservation. To address this issue, we propose Multi-Scale Enhanced Spatial Attention Network (MESA-Net), a novel architecture that synergistically combines global attention modeling with localized feature extraction. The network adopts an encoder–decoder structure, where the encoder leverages a pre-trained pyramid vision transformer v2 (PVTv2) to generate rich hierarchical representations. We design a position-aware spatial attention module and a multi-dimensional feature refinement module, which are integrated into the decoder to strengthen local context modeling and refine segmentation outputs. Comprehensive experiments on the Synapse and ACDC datasets demonstrate that MESA-Net achieves state-of-the-art performance, particularly in preserving fine anatomical structures. These improvements in segmentation quality provide a solid foundation for future XR applications, such as real-time interactive visualization and precise 3D reconstruction in clinical scenarios. Our method’s code will be released at: https://github.com/bukeyijuanjuan/MESA-Net.
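As a rough picture of what a position-aware spatial attention block can look like, the PyTorch sketch below concatenates a normalized coordinate grid to the feature map and predicts a per-pixel gate. This is a generic pattern under assumed shapes, not MESA-Net's actual module; the class name and channel widths are illustrative only.

```python
import torch
import torch.nn as nn

class PositionAwareSpatialAttention(nn.Module):
    # Generic position-aware spatial gate: appends normalized (y, x)
    # coordinates to the features and predicts one attention weight per pixel.
    def __init__(self, channels):
        super().__init__()
        mid = max(channels // 2, 1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels + 2, mid, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gy, gx]).unsqueeze(0).expand(b, -1, -1, -1)
        attn = self.gate(torch.cat([x, coords], dim=1))   # (B, 1, H, W)
        return x * attn                                    # re-weight features spatially

feats = torch.randn(2, 64, 32, 32)
out = PositionAwareSpatialAttention(64)(feats)             # same shape as feats
```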
Citations: 0
Render, Encode, Plan: A simple pipeline for hybrid RL-DL learning inside Unreal Engine
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-13 · DOI: 10.1016/j.cag.2025.104467
Daniele Della Pietra, Nicola Garau
Learning is an iterative process that requires multiple forms of interaction with the environment. During learning, we experience the world through the repetition of observations and actions, gaining an insight into which combination of these leads to the best results, according to our goals. The same paradigm has been applied to traditional reinforcement learning (RL) over the years, with impressive results in 3D navigation and planning. On the other hand, the computer vision community has been focusing mostly on vision-related tasks (e.g. classification, segmentation, depth estimation) using deep learning (DL). We present REP: Render, Encode, Plan, a unified framework to train embodied agents of different kinds (humanoids, vehicles, and drones) inside Unreal Engine, showing how a combination of RL and DL can help to shape intelligent agents that can better sense the surrounding environment. The main advantage of our method is the combination of different sensory modalities, including game state observations and vision features, that allow the agents to share a similar structure in their observations and rewards, while defining separate rewards based on their goals. We demonstrate impressive generalization capabilities on large-scale realistic 3D environments and on multiple dynamically changing scenarios, with different goals and rewards. All code, complete experiments, and environments will be available at https://mmlab-cv.github.io/REP/.
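A minimal sketch of the shared observation structure described above: one encoder fuses a rendered frame with a low-dimensional game-state vector into a single embedding that any agent type could consume. The module, dimensions, and state contents are hypothetical, not REP's published code.

```python
import torch
import torch.nn as nn

class ObservationEncoder(nn.Module):
    # Fuses a rendered RGB frame (vision branch) with a game-state vector
    # (e.g. pose, velocity, goal distance) into one observation embedding.
    def __init__(self, state_dim, embed_dim=128):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        self.state = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU())
        self.fuse = nn.Linear(32 + 32, embed_dim)

    def forward(self, frame, game_state):
        return self.fuse(torch.cat([self.vision(frame), self.state(game_state)], dim=-1))

obs = ObservationEncoder(state_dim=10)(torch.randn(4, 3, 96, 96), torch.randn(4, 10))
```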
Citations: 0
Training-free geometry-aware control for localized image viewpoint editing
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-12 · DOI: 10.1016/j.cag.2025.104485
Lingfang Wang, Meiqing Wang, Hang Cheng, Jingyue Wang, Fei Chen
The great success of diffusion models in the text-to-image field has driven increasing demand for fine-grained local image editing. One such task is changing the viewpoint of objects to given positions in accordance with 3D geometric principles. Keeping the surrounding region unchanged while maintaining structural and semantic consistency when editing the designated objects is challenging yet widely applicable. However, existing methods often fail to maintain correct geometric structure and editing efficiency simultaneously. To this end, we explore how the geometric structure of an image changes with the viewpoint from the perspective of 3D camera projection and propose a geometry-aware local viewpoint editing approach that requires neither 3D reconstruction nor model training, and performs editing solely at a single timestep in the latent space of diffusion models. Central to our approach is constructing latent-space location mappings across different viewpoints by integrating multi-view geometry theory with absolute depth information. To address assignment conflicts and missing latent features while enhancing detail fidelity, we design an occlusion reasoning mechanism and a foreground-background aware bilateral interpolation strategy. Additionally, a consistency-preserving strategy is introduced to enhance alignment with the original image. Extensive experiments on image datasets demonstrate the overall advantages of our approach in structural consistency and runtime efficiency.
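The location mapping across viewpoints rests on standard multi-view geometry: back-project each pixel with its absolute depth, apply the relative camera motion, and re-project. The sketch below shows only that textbook step under assumed conventions (pixel-aligned depth, shared intrinsics); it is not the paper's pipeline, which additionally handles occlusion reasoning and interpolation.

```python
import numpy as np

def viewpoint_location_map(depth, K, R, t):
    # depth: (H, W) absolute depth in the source view; K: (3, 3) intrinsics;
    # R, t: rotation and translation of the target camera w.r.t. the source.
    # Returns an (H, W, 2) map of target pixel coordinates for every source pixel.
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T            # back-project pixels to camera rays
    pts = rays * depth[..., None]              # 3D points in the source camera frame
    pts_t = pts @ R.T + t                      # express them in the target camera frame
    proj = pts_t @ K.T                         # re-project with the same intrinsics
    return proj[..., :2] / np.maximum(proj[..., 2:3], 1e-8)
```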
Citations: 0
SmartPoints: Enhanced local feature extraction and neighborhood diffusion network for 3D point cloud semantic segmentation
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-11 · DOI: 10.1016/j.cag.2025.104486
Ye Chen, Jian Lu, Jie Zhao, Xiaogai Chen, Kaibing Zhang
In recent years, transformer-based models have demonstrated strong performance in global information extraction. However, in 3D point cloud segmentation, such models still fall short when it comes to capturing local features and accurately identifying geometric and topological relationships. To address the resulting insufficiency in local feature extraction, we propose an enhanced local feature extraction and neighborhood diffusion network for 3D point cloud semantic segmentation (SmartPoints). First, our method aggregates local features from the input point set using a hierarchical feature fusion module (HFF), which enhances information interaction and dependency between different local regions. Next, the dual local topological structure perception module (DLTP) constructs two local topologies using positional and semantic information, respectively. An adaptive dynamic kernel is then designed to capture the mapping between the two local topologies, enhancing local feature representation. To address the challenge of unclear local neighborhood edge distinctions, which often lead to segmentation errors, we design a local neighborhood diffusion module (LND). This module achieves precise edge segmentation by enhancing target region features and suppressing non-target region features. Extensive experiments on benchmark datasets such as S3DIS, ScanNetV2 and SemanticKITTI demonstrate the superior segmentation performance of the proposed SmartPoints.
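One way to picture the dual local topology idea is as two k-nearest-neighbor graphs over the same points, one built from coordinates and one from per-point semantic features. The sketch below is that simplification only; the function names and the value of k are assumptions, not the DLTP module itself.

```python
import torch

def knn_indices(query, reference, k):
    # Indices of the k nearest reference points for every query point (L2 metric).
    return torch.cdist(query, reference).topk(k, largest=False).indices  # (N, k)

def dual_local_topologies(xyz, feats, k=16):
    # xyz: (N, 3) coordinates, feats: (N, C) per-point features.
    geo_nn = knn_indices(xyz, xyz, k)      # neighborhood from positions
    sem_nn = knn_indices(feats, feats, k)  # neighborhood from semantics
    return geo_nn, sem_nn

pts, feat = torch.randn(1024, 3), torch.randn(1024, 64)
geo_nn, sem_nn = dual_local_topologies(pts, feat)   # each (1024, 16)
```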
Citations: 0
Parametric model fitting for textured and animatable 3D avatar from a single frontal image of a clothed human
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-11 · DOI: 10.1016/j.cag.2025.104478
Fares Mallek, Carlos Vázquez, Eric Paquette
In this paper, we tackle the challenge of three-dimensional estimation of expressive, animatable, and textured human avatars from a single frontal image. Leveraging a Skinned Multi-Person Linear (SMPL) parametric body model, we adjust the model parameters to faithfully reflect the shape and pose of the individual, relying on the mesh generated by a Pixel-aligned Implicit Function (PIFu) model. To robustly infer the SMPL parameters, we deploy a multi-step optimization process. Initially, we recover the position of 2D joints using an existing pose estimation tool. Subsequently, we utilize the 3D PIFu mesh together with the 2D pose to estimate the 3D position of joints. Next, we adapt the body's parametric model to the 3D joints through rigid alignment, optimizing for global translation and rotation. This step provides a robust initialization for further refinement of shape and pose parameters. The next step involves optimizing the pose and the first component of the SMPL shape parameters while imposing constraints to enhance model robustness. We then refine the SMPL model pose and shape parameters by adding two new registration loss terms to the optimization cost function: a point-to-surface distance and a Chamfer distance. Finally, we introduce a refinement process utilizing a deformation vector field applied to the SMPL mesh, enabling more faithful modeling of tight to loose clothing geometry. As in most other works, we optimize based on images of people wearing shoes, which results in artifacts in the toe region of SMPL. We thus introduce a new shoe-like mesh topology that greatly improves the quality of the reconstructed feet. A notable advantage of our approach is the ability to generate detailed avatars with fewer vertices than previous research, enhancing computational efficiency while maintaining high fidelity. We also demonstrate how to gain even more details while maintaining the advantages of SMPL. To complete our model, we design a texture extraction and completion approach. Our entirely automated approach was evaluated against recognized benchmarks, X-Avatar and PeopleSnapshot, showcasing competitive performance against state-of-the-art methods. This approach contributes to advancing 3D modeling techniques, particularly in the realms of interactive applications, animation, and video games. We will make our code and our improved SMPL mesh topology available to the community: https://github.com/ETS-BodyModeling/ImplicitParametricAvatar.
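To make the two added registration terms concrete, here is a hedged sketch of a symmetric Chamfer distance and a crude point-to-surface proxy (distance to the nearest face centroid rather than a true triangle projection). It only illustrates the shape of such losses, not the paper's exact formulation or weights; the toy tensors stand in for the PIFu scan and the SMPL mesh.

```python
import torch

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    # average nearest-neighbor distance in both directions.
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def point_to_surface_proxy(points, verts, faces):
    # Rough point-to-surface term: distance to the nearest triangle centroid.
    # A faithful implementation would project each point onto the triangles.
    centroids = verts[faces].mean(dim=1)          # (F, 3)
    return torch.cdist(points, centroids).min(dim=1).values.mean()

scan = torch.randn(500, 3)                        # e.g. sampled PIFu mesh points
smpl_verts = torch.randn(6890, 3, requires_grad=True)
faces = torch.randint(0, 6890, (13776, 3))
loss = chamfer_distance(scan, smpl_verts) + point_to_surface_proxy(scan, smpl_verts, faces)
loss.backward()                                   # gradients flow to the SMPL vertices
```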
Citations: 0
MonoNeRF-DDP: Neural radiance fields from monocular endoscopic images with dense depth priors
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-11 · DOI: 10.1016/j.cag.2025.104487
Jinhua Liu, Dongjin Huang, Yongsheng Shi, Jiantao Qu
Synthesizing novel views from monocular endoscopic images is challenging due to sparse input views, occlusion of invalid regions, and soft tissue deformation. To tackle these challenges, we propose neural radiance fields from monocular endoscopic images with dense depth priors, called MonoNeRF-DDP. The algorithm consists of two parts: preprocessing and normative depth-assisted reconstruction. In the preprocessing part, we use labelme to obtain mask images for invalid regions in endoscopy images, preventing their reconstruction. Then, to address the view-sparsity problem, we fine-tune a monocular depth estimation network to predict dense depth maps, enabling the recovery of scene depth information from sparse views during the neural radiance fields optimization process. In the normative depth-assisted reconstruction, to deal with soft tissue deformation and inaccurate depth information, we adopt neural radiance fields for dynamic scenes that take mask images and dense depth maps as additional inputs, and utilize the proposed adaptive loss function to achieve self-supervised training. Experimental results show that MonoNeRF-DDP surpasses the best average scores of competing algorithms on the real monocular endoscopic image dataset GastroSynth. MonoNeRF-DDP can reconstruct structurally accurate shapes, fine details, and highly realistic textures with only about 15 input images. Furthermore, a study with 14 participants from medical-related fields indicates that MonoNeRF-DDP enables more accurate observation of details at disease sites and more reliable preoperative diagnoses.
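A minimal sketch of how a dense depth prior and an invalid-region mask can enter a NeRF-style objective: both the photometric and the depth terms are restricted to valid pixels. The function name, the L1 depth penalty, and the weight lam are assumptions for illustration, not the paper's adaptive loss.

```python
import torch

def masked_nerf_loss(pred_rgb, gt_rgb, pred_depth, prior_depth, valid_mask, lam=0.1):
    # pred_rgb / gt_rgb: (R, 3) per-ray colors; pred_depth / prior_depth: (R,);
    # valid_mask: (R,) bool, False for rays inside labeled invalid regions.
    m = valid_mask.float()
    denom = m.sum().clamp(min=1.0)
    rgb_term = (m * ((pred_rgb - gt_rgb) ** 2).mean(dim=-1)).sum() / denom
    depth_term = (m * (pred_depth - prior_depth).abs()).sum() / denom
    return rgb_term + lam * depth_term

loss = masked_nerf_loss(torch.rand(4096, 3), torch.rand(4096, 3),
                        torch.rand(4096), torch.rand(4096),
                        torch.rand(4096) > 0.1)
```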
Citations: 0
Automating visual narratives: Learning cinematic camera perspectives from 3D human interaction
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-10 · DOI: 10.1016/j.cag.2025.104484
Boyuan Cheng, Shang Ni, Jian Jun Zhang, Xiaosong Yang
Cinematic camera control is essential for guiding audience attention and conveying narrative intent, yet current data-driven methods largely rely on predefined visual datasets and handcrafted rules, limiting generalization and creativity. This paper introduces a novel diffusion-based framework that generates camera trajectories directly from two-character 3D motion sequences, eliminating the need for paired video–camera annotations. The approach leverages Toric features to encode spatial relations between characters and conditions the diffusion process through a dual-stream motion encoder and interaction module, enabling the camera to adapt dynamically to evolving character interactions. A new dataset linking character motion with camera parameters is constructed to train and evaluate the model. Experiments demonstrate that our method outperforms strong baselines in both quantitative metrics and perceptual quality, producing camera motions that are smooth, temporally coherent, and compositionally consistent with cinematic conventions. This work opens new opportunities for automating virtual cinematography in animation, gaming, and interactive media.
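Toric-style features describe a camera relative to two targets; a bare-bones version is the distance to each character plus the angle the pair subtends at the camera. The helper below is that simplification only, assuming 3D world positions as input; the paper's full Toric encoding carries more parameters than this.

```python
import numpy as np

def toric_like_features(cam_pos, char_a, char_b):
    # Distances from the camera to each character and the angle the
    # character pair subtends at the camera position.
    va, vb = char_a - cam_pos, char_b - cam_pos
    da, db = np.linalg.norm(va), np.linalg.norm(vb)
    cos_alpha = np.dot(va, vb) / max(da * db, 1e-8)
    alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))
    return np.array([da, db, alpha])

feat = toric_like_features(np.array([0.0, 1.6, -3.0]),
                           np.array([-0.5, 1.6, 0.0]),
                           np.array([0.5, 1.6, 0.0]))
```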
Citations: 0
Foreword to special section on expressive media
IF 2.8 · CAS Tier 4 (Computer Science) · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-11-09 · DOI: 10.1016/j.cag.2025.104483
Chiara Eva Catalano, Amal Dev Parakkat, Marc Christie
{"title":"Foreword to special section on expressive media","authors":"Chiara Eva Catalano ,&nbsp;Amal Dev Parakkat ,&nbsp;Marc Christie","doi":"10.1016/j.cag.2025.104483","DOIUrl":"10.1016/j.cag.2025.104483","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104483"},"PeriodicalIF":2.8,"publicationDate":"2025-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0