
Computers & Graphics-UK: Latest Publications

MESA-Net: Multi-Scale Enhanced Spatial Attention Network for medical image segmentation
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-13 | DOI: 10.1016/j.cag.2025.104488
Demin Liu, Zhou Yang, Hua Wang, Huiyu Li, Fan Zhang
Medical image segmentation plays a critical role in enabling precise visualization and interaction within Extended Reality (XR) environments, which are increasingly used in surgical planning, image-guided interventions, and medical training. Transformer-based architectures have recently become a prominent approach for medical image segmentation due to their ability to capture long-range dependencies through self-attention mechanisms. However, these models often struggle to effectively extract local contextual information that is essential for accurate boundary delineation and fine-grained structure preservation. To address this issue, we propose Multi-Scale Enhanced Spatial Attention Network (MESA-Net), a novel architecture that synergistically combines global attention modeling with localized feature extraction. The network adopts an encoder–decoder structure, where the encoder leverages a pre-trained pyramid vision transformer v2 (PVTv2) to generate rich hierarchical representations. We design a position-aware spatial attention module and a multi-dimensional feature refinement module, which are integrated into the decoder to strengthen local context modeling and refine segmentation outputs. Comprehensive experiments on the Synapse and ACDC datasets demonstrate that MESA-Net achieves state-of-the-art performance, particularly in preserving fine anatomical structures. These improvements in segmentation quality provide a solid foundation for future XR applications, such as real-time interactive visualization and precise 3D reconstruction in clinical scenarios. Our method’s code will be released at: https://github.com/bukeyijuanjuan/MESA-Net.
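The abstract gives no implementation detail, but a compact spatial attention gate conveys the general mechanism such decoder modules build on. The sketch below is illustrative only — a generic CBAM-style gate in PyTorch, not the authors' position-aware module; the class name, shapes, and kernel size are assumptions.

```python
# Illustrative sketch only: a generic spatial attention gate of the kind a
# segmentation decoder can use to re-weight local context before fusion.
# NOT the authors' position-aware module; names and shapes are assumed.
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)        # (B, 1, H, W) channel average
        max_pool, _ = x.max(dim=1, keepdim=True)      # (B, 1, H, W) channel maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                               # spatially re-weighted features

if __name__ == "__main__":
    feats = torch.randn(2, 64, 56, 56)                # hypothetical decoder feature map
    print(SpatialAttentionGate()(feats).shape)        # torch.Size([2, 64, 56, 56])
```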
Citations: 0
Render, Encode, Plan: A simple pipeline for hybrid RL-DL learning inside Unreal Engine
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-13 | DOI: 10.1016/j.cag.2025.104467
Daniele Della Pietra, Nicola Garau
Learning is an iterative process that requires multiple forms of interaction with the environment. During learning, we experience the world through the repetition of observations and actions, gaining an insight into which combination of these leads to the best results, according to our goals. The same paradigm has been applied to traditional reinforcement learning (RL) over the years, with impressive results in 3D navigation and planning. On the other hand, the computer vision community has been focusing mostly on vision-related tasks (e.g. classification, segmentation, depth estimation) using deep learning (DL). We present REP: Render, Encode, Plan, a unified framework to train embodied agents of different kinds (humanoids, vehicles, and drones) inside Unreal Engine, showing how a combination of RL and DL can help to shape intelligent agents that can better sense the surrounding environment. The main advantage of our method is the combination of different sensory modalities, including game state observations and vision features, that allow the agents to share a similar structure in their observations and rewards, while defining separate rewards based on their goals. We demonstrate impressive generalization capabilities on large-scale realistic 3D environments and on multiple dynamically changing scenarios, with different goals and rewards. All code, complete experiments, and environments will be available at https://mmlab-cv.github.io/REP/.
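As a rough illustration of the "combination of different sensory modalities" the abstract mentions, the sketch below fuses an embedding of a rendered frame with a low-dimensional game-state vector into a single observation embedding. It is not the REP codebase; every name, dimension, and layer choice here is an assumption.

```python
# Illustrative sketch only (not the REP codebase): fusing a rendered-frame
# embedding with a game-state vector into one shared observation embedding.
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    def __init__(self, state_dim: int = 16, embed_dim: int = 128):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.state = nn.Linear(state_dim, 64)
        self.head = nn.LazyLinear(embed_dim)          # infers the flattened vision size

    def forward(self, frame: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.vision(frame), torch.relu(self.state(state))], dim=-1)
        return self.head(z)                           # shared observation embedding

if __name__ == "__main__":
    frame = torch.randn(1, 3, 84, 84)                 # hypothetical rendered view
    state = torch.randn(1, 16)                        # hypothetical game-state features
    print(FusionEncoder()(frame, state).shape)        # torch.Size([1, 128])
```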
Citations: 0
Training-free geometry-aware control for localized image viewpoint editing
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-12 | DOI: 10.1016/j.cag.2025.104485
Lingfang Wang, Meiqing Wang, Hang Cheng, Jingyue Wang, Fei Chen
The great success of diffusion models in the text-to-image field has driven increasing demand for fine-grained local image editing. One such task is changing the viewpoint of objects to given positions in accordance with 3D geometric principles. Keeping the surrounding region unchanged while maintaining structural and semantic consistency when editing the designated objects is a challenging yet widely applicable task. However, existing methods often fail to maintain correct geometric structure and editing efficiency simultaneously. To this end, we explore how the geometric structure of an image changes with the viewpoint from the perspective of 3D camera projection and propose a geometry-aware local viewpoint editing approach that requires neither 3D reconstruction nor model training, and performs editing solely at a single timestep in the latent space of diffusion models. Central to our approach is constructing latent-space location mappings across different viewpoints by integrating multi-view geometry theory with absolute depth information. To address assignment conflicts and missing latent features while enhancing detail fidelity, we design an occlusion reasoning mechanism and a foreground-background aware bilateral interpolation strategy. Additionally, a consistency-preserving strategy is introduced to enhance alignment with the original image. Extensive experiments on image datasets demonstrate the overall advantages of our approach in structural consistency and runtime efficiency.
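The cross-view location mappings described above rest on standard multi-view geometry: back-project a pixel with its depth, then reproject into the target view. The sketch below shows only that textbook projection under an ideal pinhole camera assumption; it is not the paper's latent-space method, and the intrinsics and camera shift are placeholders.

```python
# Minimal sketch, assuming an ideal pinhole camera (not the paper's method):
# mapping a source-view pixel with known depth to its location in a target view.
import numpy as np

def remap_pixel(uv, depth, K, R, t):
    """uv: source pixel, depth: metric depth, K: 3x3 intrinsics,
    (R, t): rotation/translation from the source to the target camera."""
    u, v = uv
    p_src = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project to 3D
    p_tgt = R @ p_src + t                                        # move to target frame
    uvw = K @ p_tgt                                              # project into target view
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0.0, 256.0], [0.0, 500.0, 256.0], [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])                        # hypothetical 10 cm lateral shift
print(remap_pixel((256, 256), 2.0, K, np.eye(3), t)) # shifted pixel coordinates
```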
Citations: 0
SmartPoints: Enhanced local feature extraction and neighborhood diffusion network for 3D point cloud semantic segmentation
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-11 | DOI: 10.1016/j.cag.2025.104486
Ye Chen, Jian Lu, Jie Zhao, Xiaogai Chen, Kaibing Zhang
In recent years, transformer-based models have demonstrated strong performance in global information extraction. However, in 3D point cloud segmentation, such models still fall short when it comes to capturing local features and accurately identifying geometric and topological relationships. To address the resulting insufficiency in local feature extraction, we propose an enhanced local feature extraction and neighborhood diffusion network for 3D point cloud semantic segmentation (SmartPoints). First, our method aggregates local features from the input point set using a hierarchical feature fusion module (HFF), which enhances information interaction and dependency between different local regions. Next, the dual local topological structure perception module (DLTP) constructs two local topologies using positional and semantic information, respectively. An adaptive dynamic kernel is then designed to capture the mapping between the two local topologies, enhancing local feature representation. To address the challenge of unclear local neighborhood edge distinctions, which often lead to segmentation errors, we design a local neighborhood diffusion module (LND). This module achieves precise edge segmentation by enhancing target region features and suppressing non-target region features. Extensive experiments on benchmark datasets such as S3DIS, ScanNetV2 and SemanticKITTI demonstrate the superior segmentation performance of the proposed SmartPoints.
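The local modules described above all start from neighborhood grouping. As a point of reference only, the sketch below shows the generic k-nearest-neighbor grouping and relative-position pooling that such local-context modules build on; it is not the SmartPoints code, and the function name and pooling choice are assumptions.

```python
# Illustrative sketch only (not the SmartPoints code): k-NN grouping of a point
# cloud and max-pooling of relative-position features within each neighborhood.
import torch

def knn_local_features(xyz: torch.Tensor, k: int = 16) -> torch.Tensor:
    """xyz: (N, 3) point coordinates. Returns (N, 3) pooled local features."""
    dists = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
    idx = dists.topk(k, largest=False).indices       # (N, k) neighbor indices
    neighbors = xyz[idx]                             # (N, k, 3) grouped neighbors
    rel = neighbors - xyz.unsqueeze(1)               # relative positions per group
    return rel.max(dim=1).values                     # simple max-pool aggregation

points = torch.rand(1024, 3)
print(knn_local_features(points).shape)              # torch.Size([1024, 3])
```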
Citations: 0
Parametric model fitting for textured and animatable 3D avatar from a single frontal image of a clothed human
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-11 | DOI: 10.1016/j.cag.2025.104478
Fares Mallek, Carlos Vázquez, Eric Paquette
In this paper, we tackle the challenge of three-dimensional estimation of expressive, animatable, and textured human avatars from a single frontal image. Leveraging a Skinned Multi-Person Linear (SMPL) parametric body model, we adjust the model parameters to faithfully reflect the shape and pose of the individual, relying on the mesh generated by a Pixel-aligned Implicit Function (PIFu) model. To robustly infer the SMPL parameters, we deploy a multi-step optimization process. Initially, we recover the position of 2D joints using an existing pose estimation tool. Subsequently, we utilize the 3D PIFu mesh together with the 2D pose to estimate the 3D position of joints. In the subsequent step, we adapt the body’s parametric model to the 3D joints through rigid alignment, optimizing for global translation and rotation. This step provides a robust initialization for further refinement of shape and pose parameters. The next step involves optimizing the pose and the first component of the SMPL shape parameters while imposing constraints to enhance model robustness. We then refine the SMPL model pose and shape parameters by adding two new registration loss terms to the optimization cost function: a point-to-surface distance and a Chamfer distance. Finally, we introduce a refinement process utilizing a deformation vector field applied to the SMPL mesh, enabling more faithful modeling of tight to loose clothing geometry. Like most other works, we optimize based on images of people wearing shoes, resulting in artifacts in the toes region of SMPL. We thus introduce a new shoe-like mesh topology which greatly improves the quality of the reconstructed feet. A notable advantage of our approach is the ability to generate detailed avatars with fewer vertices compared to previous research, enhancing computational efficiency while maintaining high fidelity. We also demonstrate how to gain even more details, while maintaining the advantages of SMPL. To complete our model, we design a texture extraction and completion approach. Our entirely automated approach was evaluated against recognized benchmarks, X-Avatar and PeopleSnapshot, showcasing competitive performance against state-of-the-art methods. This approach contributes to advancing 3D modeling techniques, particularly in the realms of interactive applications, animation, and video games. We will make our code and our improved SMPL mesh topology available to the community: https://github.com/ETS-BodyModeling/ImplicitParametricAvatar.
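The Chamfer registration term mentioned in the abstract has a standard symmetric form. The sketch below shows only that generic formulation, not the authors' exact loss or weighting; the target point count is a placeholder, and only the 6890-vertex SMPL topology is taken from the standard SMPL model.

```python
# Minimal sketch, assuming a symmetric Chamfer formulation (not the authors'
# exact loss): a registration term pulling SMPL vertices toward a target mesh.
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a: (N, 3) SMPL vertices, b: (M, 3) target surface samples."""
    d = torch.cdist(a, b)                            # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

smpl_verts = torch.rand(6890, 3, requires_grad=True) # SMPL meshes have 6890 vertices
target = torch.rand(5000, 3)                         # hypothetical PIFu point samples
loss = chamfer_distance(smpl_verts, target)
loss.backward()                                      # gradients flow back to the vertices
print(float(loss))
```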
Citations: 0
MonoNeRF-DDP: Neural radiance fields from monocular endoscopic images with dense depth priors
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-11 | DOI: 10.1016/j.cag.2025.104487
Jinhua Liu, Dongjin Huang, Yongsheng Shi, Jiantao Qu
Synthesizing novel views from monocular endoscopic images is challenging due to sparse input views, occlusion of invalid regions, and soft tissue deformation. To tackle these challenges, we propose neural radiance fields from monocular endoscopic images with dense depth priors, called MonoNeRF-DDP. The algorithm consists of two parts: preprocessing and normative depth-assisted reconstruction. In the preprocessing part, we use labelme to obtain mask images for invalid regions in endoscopy images, preventing their reconstruction. Then, to address the view sparsity problem, we fine-tune a monocular depth estimation network to predict dense depth maps, enabling the recovery of scene depth information from sparse views during the neural radiance fields optimization process. In the normative depth-assisted reconstruction, to deal with the issues of soft tissue deformation and inaccurate depth information, we adopt neural radiance fields for dynamic scenes to take mask images and dense depth maps as additional inputs and utilize the proposed adaptive loss function to achieve self-supervised training. Experimental results show that MonoNeRF-DDP outperforms the best average values of competing algorithms across the real monocular endoscopic image dataset GastroSynth. MonoNeRF-DDP can reconstruct structurally accurate shapes, fine details, and highly realistic textures with only about 15 input images. Furthermore, a study of 14 medical-related participants indicates that MonoNeRF-DDP can more accurately observe the details of the disease sites and make more reliable preoperative diagnoses.
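To make the role of the dense depth priors and invalid-region masks concrete, the sketch below combines a per-ray photometric term with a masked depth-prior term under a fixed weight. This is a generic depth-supervised NeRF loss, not the paper's adaptive formulation; the weight, ray count, and function name are assumptions.

```python
# Minimal sketch, assuming a fixed weighting (the paper uses an adaptive loss):
# photometric error plus a masked depth-prior error, averaged over valid rays.
import torch

def depth_supervised_loss(pred_rgb, gt_rgb, pred_depth, prior_depth,
                          valid_mask, lambda_depth: float = 0.1):
    """All tensors share a (num_rays, ...) leading shape; valid_mask is 0/1."""
    color = ((pred_rgb - gt_rgb) ** 2).mean(dim=-1)  # photometric error per ray
    depth = (pred_depth - prior_depth).abs()         # depth-prior error per ray
    per_ray = (color + lambda_depth * depth) * valid_mask
    return per_ray.sum() / valid_mask.sum().clamp(min=1)

rays = 4096
loss = depth_supervised_loss(torch.rand(rays, 3), torch.rand(rays, 3),
                             torch.rand(rays), torch.rand(rays), torch.ones(rays))
print(float(loss))
```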
Citations: 0
Automating visual narratives: Learning cinematic camera perspectives from 3D human interaction
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-10 | DOI: 10.1016/j.cag.2025.104484
Boyuan Cheng, Shang Ni, Jian Jun Zhang, Xiaosong Yang
Cinematic camera control is essential for guiding audience attention and conveying narrative intent, yet current data-driven methods largely rely on predefined visual datasets and handcrafted rules, limiting generalization and creativity. This paper introduces a novel diffusion-based framework that generates camera trajectories directly from two-character 3D motion sequences, eliminating the need for paired video–camera annotations. The approach leverages Toric features to encode spatial relations between characters and conditions the diffusion process through a dual-stream motion encoder and interaction module, enabling the camera to adapt dynamically to evolving character interactions. A new dataset linking character motion with camera parameters is constructed to train and evaluate the model. Experiments demonstrate that our method outperforms strong baselines in both quantitative metrics and perceptual quality, producing camera motions that are smooth, temporally coherent, and compositionally consistent with cinematic conventions. This work opens new opportunities for automating virtual cinematography in animation, gaming, and interactive media.
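The abstract's Toric features encode the camera relative to the two characters. As a loose illustration of that kind of spatial-relation encoding, the sketch below computes a few simplified framing features (distances and the subtended angle); it is not the paper's Toric parameterization, and all positions are placeholders.

```python
# Illustrative sketch only (simplified framing features, not the paper's Toric
# parameterization): camera-relative features for a two-character shot.
import numpy as np

def framing_features(cam_pos, char_a, char_b):
    """Distances to each character and the angle they subtend at the camera."""
    va, vb = char_a - cam_pos, char_b - cam_pos
    da, db = np.linalg.norm(va), np.linalg.norm(vb)
    cos_angle = np.clip(np.dot(va, vb) / (da * db), -1.0, 1.0)
    return np.array([da, db, np.arccos(cos_angle)])

camera = np.array([0.0, 1.6, -3.0])                  # hypothetical camera position
a = np.array([-0.5, 1.6, 0.0])                       # character A position
b = np.array([0.5, 1.6, 0.0])                        # character B position
print(framing_features(camera, a, b))                # [dist_a, dist_b, angle_rad]
```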
Citations: 0
Foreword to special section on expressive media
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-09 | DOI: 10.1016/j.cag.2025.104483
Chiara Eva Catalano, Amal Dev Parakkat, Marc Christie
{"title":"Foreword to special section on expressive media","authors":"Chiara Eva Catalano ,&nbsp;Amal Dev Parakkat ,&nbsp;Marc Christie","doi":"10.1016/j.cag.2025.104483","DOIUrl":"10.1016/j.cag.2025.104483","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104483"},"PeriodicalIF":2.8,"publicationDate":"2025-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Voice of artifacts: Evaluating user preferences for artifact voice in VR museums
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-08 | DOI: 10.1016/j.cag.2025.104473
Bingqing Chen, Wenqi Chu, Xubo Yang, Yue Li
Voice is a powerful medium for conveying personality, emotion, and social presence, yet its role in cultural contexts such as virtual museums remains underexplored. While prior research in virtual reality (VR) has focused on ambient soundscapes or system-driven narration, little is known about what kinds of artifact voices users actually prefer, or if customized voices influence their experience. In this study, we designed a virtual museum and examined user perceptions of three types of voices for artifact chatbots, including a neutral synthetic voice (default), a socially relatable voice (familiar), and a user-customized voice with adjustable elements (customized). Through a within-subjects experiment, we measured user experience with established scales and a semi-structured interview. Results showed a strong user preference for the customized voice, which significantly outperformed the other two conditions. These findings suggest that users not only expect artifacts to speak, but also prefer to have control over the voices, which can enhance their experience and engagement. Our findings provide empirical evidence for the importance of voice customization in virtual museums and lay the groundwork for future design of interactive, user-centered sound and vocal experiences in VR environments.
Citations: 0
Detail Enhancement Gaussian Avatar: High-quality head avatars modeling
IF 2.8 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-11-08 | DOI: 10.1016/j.cag.2025.104482
Zhangjin Huang, Bowei Yin
Modeling animatable head avatars from monocular video is a long-standing and challenging problem. Although recent approaches based on 3D Gaussian Splatting (3DGS) have achieved notable progress, the rendered avatars still exhibit several limitations. First, conventional 3DMM priors lack explicit geometric modeling for the eyes and teeth, leading to missing or suboptimal Gaussian initialization in these regions. Second, the heterogeneous characteristics of different facial subregions cause uniform joint training to under-optimize fine-scale details. Third, typical 3DGS issues such as boundary floaters and rendering artifacts remain unresolved in facial Gaussian representations. To address these challenges, we propose Detail Enhancement Gaussian Avatar (DEGA). (1) We augment Gaussian initialization with explicit eye and teeth regions, filling structural gaps left by standard 3DMM-based setups. (2) We introduce a hierarchical Gaussian representation that refines and decomposes the face into semantically aware subregions, enabling more thorough supervision and balanced optimization across all facial areas. (3) We incorporate a learned confidence attribute to suppress unreliable Gaussians, effectively mitigating boundary artifacts and floater phenomena. Overall, DEGA produces lifelike, dynamically expressive head avatars with high-fidelity geometry and appearance. Experiments on public benchmarks demonstrate that our method consistently outperforms state-of-the-art baselines.
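The confidence attribute described above is, at its simplest, a per-Gaussian score used to discard unreliable primitives. The sketch below shows only that generic thresholding idea, not the DEGA training code; the threshold, counts, and function name are assumptions.

```python
# Minimal sketch, assuming a simple sigmoid threshold (not the DEGA training
# code): dropping low-confidence Gaussians to suppress boundary floaters.
import torch

def prune_gaussians(means, opacities, confidence_logits, threshold: float = 0.5):
    """means: (N, 3), opacities: (N,), confidence_logits: (N,) learned scores."""
    keep = torch.sigmoid(confidence_logits) > threshold
    return means[keep], opacities[keep]

means = torch.rand(10000, 3)
opacities = torch.rand(10000)
confidence = torch.randn(10000)                      # hypothetical learned logits
kept_means, kept_opac = prune_gaussians(means, opacities, confidence)
print(kept_means.shape[0], "Gaussians kept")
```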
Citations: 0