
Latest Articles in IEEE Transactions on Visualization and Computer Graphics

A Multidimensional Assessment Method for Visualization Understanding (MdamV).
IF 6.5 Pub Date: 2026-01-14 DOI: 10.1109/TVCG.2026.3653265
Antonia Saske, Laura Koesten, Torsten Moller, Judith Staudner, Sylvia Kritzinger

How audiences read, interpret, and critique data visualizations is mainly assessed through performance tests featuring tasks like value retrieval. Yet, other factors shown to shape visualization understanding, such as numeracy, graph familiarity, and aesthetic perception, remain underrepresented in existing instruments. To address this, we design and test a Multidimensional Assessment Method for Visualization Understanding (MdamV). This method integrates task-based measures with self-perceived ability ratings and open-ended critique, applied directly to the visualizations being read. Grounded in learning sciences frameworks that view understanding as a multifaceted process, MdamV spans six dimensions: Comprehending, Decoding, Aestheticizing, Critiquing, Reading, and Contextualizing. Validation was supported by a survey (N=438) representative of Austria's population (ages 18-74, male/female split), using a line chart and a bar chart on climate data. Findings show, for example, that about a quarter of respondents indicated deficits in comprehending simple data units, roughly one in five felt unfamiliar with each chart type, and self-assessed numeracy was significantly related to data reading performance (p=0.0004). Overall, the evaluation of MdamV demonstrates the value of assessing visualization understanding beyond performance, framing it as a situated process tied to particular visualizations.

Citations: 0
Dynamic Skinning: Kinematics-Driven Cartoon Effects for Articulated Characters.
IF 6.5 Pub Date: 2026-01-14 DOI: 10.1109/TVCG.2026.3653317
Damien Rohmer, Karim Salem, Niranjan Kalyanasundaram, Victor Zordan

We present an extension to traditional rig skinning, like Linear Blend Skinning (LBS), to produce secondary motions that exhibit the appearance of a physical phenomenon without the need for simulation. At the core of the technique, which we call dynamic skinning, is a set of deformers that offset the position of individual vertices as a function of position derivatives and time. Examples of such deformers create effects such as oscillation in response to movement and the appearance of wave propagation, among others. Because the technique computes offsets directly and does not solve physics equations, it is extremely fast to compute. It also boasts a high degree of customizability, which supports a desirable artist workflow and a fine level of control. Finally, we showcase the technique in a number of scenarios and make comparisons with the state of the art.
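The abstract does not specify the deformers beyond offsets computed from position derivatives and time; the following is a minimal sketch, under that assumption, of one such kinematics-driven deformer (a damped oscillation layered on an LBS result). The parameter names and the particular response curve are illustrative, not the authors' formulation.

```python
import numpy as np

def oscillation_deformer(v_lbs, bone_velocity, t, stiffness=40.0, damping=4.0, gain=0.02):
    # Offset the skinned vertex opposite to the recent bone motion; the offset
    # oscillates at the natural frequency implied by the stiffness and decays
    # exponentially with the time elapsed since the pose change.
    omega = np.sqrt(stiffness)
    lag = -gain * bone_velocity * np.exp(-damping * t) * np.cos(omega * t)
    return v_lbs + lag

# Usage: layer the secondary motion on top of a linear-blend-skinned vertex.
v_lbs = np.array([0.1, 1.6, 0.0])        # vertex position from LBS
bone_vel = np.array([0.0, 0.0, 2.5])     # finite-difference velocity of the driving joint
print(oscillation_deformer(v_lbs, bone_vel, t=0.08))
```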

Citations: 0
ESGaussianFace: Emotional and Stylized Audio-Driven Facial Animation Via 3D Gaussian Splatting.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2026.3651640
Chuhang Ma, Shuai Tan, Ye Pan, Jiaolong Yang, Xin Tong

Most current audio-driven facial animation research primarily focuses on generating videos with neutral emotions. While some studies have addressed the generation of facial videos driven by emotional audio, efficiently generating high-quality talking head videos that integrate both emotional expressions and style features remains a significant challenge. In this paper, we propose ESGaussianFace, an innovative framework for emotional and stylized audio-driven facial animation. Our approach leverages 3D Gaussian Splatting to reconstruct 3D scenes and render videos, ensuring efficient generation of 3D consistent results. We propose an emotion-audio-guided spatial attention method that effectively integrates emotion features with audio content features. Through emotion-guided attention, the model is able to reconstruct facial details across different emotional states more accurately. To achieve emotional and stylized deformations of the 3D Gaussian points through emotion and style features, we introduce two 3D Gaussian deformation predictors. Furthermore, we propose a multi-stage training strategy, enabling the step-by-step learning of the character's lip movements, emotional variations, and style features. Our generated results exhibit high efficiency, high quality, and 3D consistency. Extensive experimental results demonstrate that our method outperforms existing state-of-the-art techniques in terms of lip movement accuracy, expression variation, and style feature expressiveness.
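The abstract names an emotion-audio-guided spatial attention mechanism but not its formulation; as a rough illustration of the general idea, the sketch below conditions the queries of a standard scaled-dot-product attention over per-frame audio features on an emotion embedding. The shapes, projections, and additive conditioning are assumptions, not the paper's architecture.

```python
import numpy as np

def emotion_guided_attention(audio_feats, emotion_vec, w_q, w_k, w_v):
    # audio_feats: [T, D] per-frame audio content features
    # emotion_vec: [D] emotion embedding biasing the queries
    # w_q, w_k, w_v: [D, D] projection matrices
    q = (audio_feats + emotion_vec) @ w_q      # emotion-conditioned queries
    k = audio_feats @ w_k
    v = audio_feats @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                          # fused emotion-audio features, [T, D]

T, D = 16, 32
rng = np.random.default_rng(0)
fused = emotion_guided_attention(rng.normal(size=(T, D)), rng.normal(size=D),
                                 *(rng.normal(size=(D, D)) for _ in range(3)))
print(fused.shape)  # (16, 32)
```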

Citations: 0
GeoTexDensifier: Geometry-Texture-Aware Densification for High-Quality Photorealistic 3D Gaussian Splatting.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2025.3644697
Hanqing Jiang, Xiaojun Xiang, Han Sun, Hongjie Li, Liyang Zhou, Xiaoyu Zhang, Guofeng Zhang

3D Gaussian Splatting (3DGS) has recently attracted wide attention in various areas such as 3D navigation, Virtual Reality (VR) and 3D simulation, due to its photorealistic and efficient rendering performance. High-quality reconstruction of 3DGS relies on sufficient splats and a reasonable distribution of these splats to fit real geometric surface and texture details, which turns out to be a challenging problem. We present GeoTexDensifier, a novel geometry-texture-aware densification strategy to reconstruct high-quality Gaussian splats which better comply with the geometric structure and texture richness of the scene. Specifically, our GeoTexDensifier framework carries out an auxiliary texture-aware densification method to produce a denser distribution of splats in fully textured areas, while keeping sparsity in low-texture regions to maintain the quality of the Gaussian point cloud. Meanwhile, a geometry-aware splitting strategy takes depth and normal priors to guide the splitting sampling and filters out the noisy splats whose initial positions are far from the actual geometric surfaces they aim to fit, under a Depth Ratio Change validation check. With the help of a relative monocular depth prior, such geometry-aware validation can effectively reduce the influence of scattered Gaussians on the final rendering quality, especially in regions with weak textures or without sufficient training views. The texture-aware densification and geometry-aware splitting strategies are fully combined to obtain a set of high-quality Gaussian splats. We experiment with our GeoTexDensifier framework on various datasets and compare our Novel View Synthesis results to other state-of-the-art 3DGS approaches, with detailed quantitative and qualitative evaluations to demonstrate the effectiveness of our method in producing more photorealistic 3DGS models.
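As a toy illustration of combining the two gates the abstract describes, texture-aware densification and geometry-aware split validation, the decision rule below densifies only where a texture score and the usual 3DGS gradient cue are high and a depth-ratio-change test passes. The score definitions and thresholds are assumptions, not the paper's criteria.

```python
def should_densify(texture_score, grad_norm, depth_ratio_change,
                   tex_thresh=0.3, grad_thresh=0.0002, depth_thresh=0.15):
    # Hypothetical per-splat densification rule in the spirit of the abstract.
    textured_enough = texture_score > tex_thresh              # texture-aware gate
    needs_more_splats = grad_norm > grad_thresh               # standard 3DGS gradient cue
    geometrically_valid = depth_ratio_change < depth_thresh   # depth/normal-prior check
    return textured_enough and needs_more_splats and geometrically_valid

print(should_densify(0.8, 0.0005, 0.05))   # True: textured, high gradient, stays near the surface
print(should_densify(0.1, 0.0005, 0.05))   # False: low-texture region stays sparse
```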

Citations: 0
Generating Distance-Aware Human-to-Human Interaction Motions From Text Guidance.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2026.3651382
Jia-Qi Zhang, Jia-Jun Wang, Fang-Lue Zhang, Miao Wang

The growing demand for diverse and realistic character animations in video games and films has driven the development of natural language-controlled motion generation systems. While recent advances in text-driven 3D human motion synthesis have made significant progress, generating realistic multi-person interactions remains a major challenge. Existing methods, such as denoising diffusion models and autoregressive frameworks, have explored interaction dynamics using attention mechanisms and causal modeling. However, they consistently overlook a critical physical constraint: the explicit spatial distance between interacting body parts, which is essential for producing semantically accurate and physically plausible interactions. To address this limitation, we propose InterDist, a novel masked generative Transformer model operating in a discrete state space. Our key idea is to decompose two-person motion into three components: two independent, interaction-agnostic single-person motion sequences and a separate interaction distance sequence. This formulation enables direct learning of both individual motion and dynamic spatial relationships from text prompts. We implement this via a VQ-VAE that jointly encodes independent motions and relative distances into discrete codebooks, followed by a bidirectional masked generative Transformer that models their joint distribution conditioned on text. To better align motion and language, we also introduce a cross-modal interaction module to enhance text-motion association. Our approach ensures the generated motions exhibit both semantic alignment with textual descriptions and plausible inter-character distances, setting a new benchmark for text-driven multi-person interaction generation.
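The core decomposition described above, two independent single-person sequences plus an explicit interaction-distance sequence, can be illustrated with a short sketch. The joint indexing and the choice of joint pairs are assumptions for demonstration only, not the paper's parameterization.

```python
import numpy as np

def decompose_interaction(motion_a, motion_b, joint_pairs):
    # motion_a, motion_b: [T, J, 3] per-person joint trajectories
    # joint_pairs: list of (joint_in_a, joint_in_b) whose distances to track
    dists = np.stack([np.linalg.norm(motion_a[:, ja] - motion_b[:, jb], axis=-1)
                      for ja, jb in joint_pairs], axis=-1)   # [T, len(joint_pairs)]
    return motion_a, motion_b, dists

T, J = 120, 22
rng = np.random.default_rng(1)
a, b = rng.normal(size=(T, J, 3)), rng.normal(size=(T, J, 3))
_, _, d = decompose_interaction(a, b, joint_pairs=[(9, 9), (20, 21)])  # e.g. hand-to-hand contacts
print(d.shape)  # (120, 2): one distance channel per tracked joint pair
```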

Citations: 0
Locally Adapted Reference Frame Fields using Moving Least Squares.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2025.3634845
Julio Rey Ramirez, Peter Rautek, Tobias Gunther, Markus Hadwiger

The detection and analysis of features in fluid flow are important tasks in fluid mechanics and flow visualization. One recent class of methods to approach this problem is to first compute objective optimal reference frames, relative to which the input vector field becomes as steady as possible. However, existing methods either optimize locally over a fixed neighborhood, which might not match the extent of interesting features well, or perform global optimization, which is costly. We propose a novel objective method for the computation of optimal reference frames that automatically adapts to the flow field locally, without having to choose neighborhoods a priori. We enable adaptivity by formulating this problem as a moving least squares approximation, through which we determine a continuous field of reference frames. To incorporate fluid features into the computation of the reference frame field, we introduce the use of a scalar guidance field into the moving least squares approximation. The guidance field determines a curved manifold on which a regularly sampled input vector field becomes a set of irregularly spaced samples, which then forms the input to the moving least squares approximation. Although the guidance field can be any scalar field, by using a field that corresponds to flow features the resulting reference frame field will adapt accordingly. We show that using an FTLE field as the guidance field results in a reference frame field that adapts better to local features in the flow than prior work. However, our moving least squares framework is formulated in a very general way, and therefore other types of guidance fields could be used in the future to adapt to local fluid features.
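The paper's full formulation (objective optimal reference frames) is more involved than can be shown here, but the core idea of a moving least squares fit whose weights also depend on a scalar guidance field can be sketched as follows. The weight kernel, bandwidths, and the toy rotational flow are assumptions for illustration.

```python
import numpy as np

def mls_weights(query_xy, query_g, pts_xy, pts_g, h_space=1.0, h_guide=0.5):
    # Weight samples by spatial distance and by difference in a scalar guidance
    # field g (e.g., FTLE), so the local fit adapts to flow features instead of
    # a fixed isotropic neighborhood.
    d2_space = np.sum((pts_xy - query_xy) ** 2, axis=-1) / h_space ** 2
    d2_guide = (pts_g - query_g) ** 2 / h_guide ** 2
    return np.exp(-(d2_space + d2_guide))

def local_linear_fit(query_xy, query_g, pts_xy, pts_g, vecs):
    # Weighted least-squares fit v(x) ~ A (x - x0) + b around the query point.
    w = mls_weights(query_xy, query_g, pts_xy, pts_g)
    X = np.hstack([pts_xy - query_xy, np.ones((len(pts_xy), 1))])   # [N, 3]
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * X, sw * vecs, rcond=None)
    return coef  # rows: dv/dx, dv/dy, and the local velocity b at the query

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(200, 2))
g = np.hypot(pts[:, 0], pts[:, 1])                 # stand-in guidance field
vel = np.stack([-pts[:, 1], pts[:, 0]], axis=-1)   # rigid rotation as a toy flow
print(local_linear_fit(np.zeros(2), 0.0, pts, g, vel))
```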

Citations: 0
SeparateGen: Semantic Component-based 3D Character Generation from Single Images.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2026.3652452
Dong-Yang Li, Yi-Long Liu, Zi-Xian Liu, Yan-Pei Cao, Meng-Hao Guo, Shi-Min Hu

Creating detailed 3D characters from a single image remains challenging due to the difficulty in separating semantic components during generation. Existing methods often produce entangled meshes with poor topology, hindering downstream applications like rigging and animation. We introduce SeparateGen, a novel framework that generates high-quality 3D characters by explicitly reconstructing them as distinct semantic components (e.g., body, clothing, hair, shoes) from a single, arbitrary-pose image. SeparateGen first leverages a multi-view diffusion model to generate consistent multi-view images in a canonical A-pose. Then, a novel component-aware reconstruction model, SC-LRM, conditioned on these multi-view images, adaptively decomposes and reconstructs each component with high fidelity. To train and evaluate SeparateGen, we contribute SC-Anime, the first large-scale dataset of 7,580 anime-style 3D characters with detailed component-level annotations. Extensive experiments demonstrate that SeparateGen significantly outperforms state-of-the-art methods in both reconstruction quality and multi-view consistency. Furthermore, our component-based approach effectively resolves mesh entanglement issues, enabling seamless rigging and asset reuse. SeparateGen thus represents a step towards generating high-quality, application-ready 3D characters from a single image. The SC-Anime dataset and our code will be publicly released.
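As a small illustration of why component-wise output simplifies downstream reuse, the container below stores each semantic part separately and lets one part be swapped without touching the others; the class and its fields are hypothetical, not the SC-LRM interface.

```python
from dataclasses import dataclass, field

@dataclass
class CharacterComponents:
    # Illustrative container for component-wise character output.
    meshes: dict = field(default_factory=dict)   # e.g. {"body": ..., "hair": ...}

    def add(self, name, mesh):
        self.meshes[name] = mesh

    def swap(self, name, mesh):
        # Asset reuse: replace one semantic part while the rest stay intact.
        if name not in self.meshes:
            raise KeyError(f"unknown component: {name}")
        self.meshes[name] = mesh

character = CharacterComponents()
for part in ("body", "clothing", "hair", "shoes"):
    character.add(part, mesh=f"<{part} mesh>")       # placeholder payloads
character.swap("shoes", mesh="<alternate shoes mesh>")
print(sorted(character.meshes))
```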

Citations: 0
SGGS: Semantic-Guided 3D Gaussian Splatting With Adaptive Rendering.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2026.3650881
Annan Zhou, Li Wang, Jian Li, Jing Huang, Li Li, Jian Yao

3D Gaussian Splatting (3DGS) has shown great promise in a variety of applications due to its exceptional real-time rendering quality and explicit representation, leading to numerous improvements across various fields. However, existing methods lack consideration of main objects and important structural information in their overall optimization strategies. This results in blurring of main objects in adaptive rendering and the loss of high-frequency details on targets that are insufficiently captured. In this work, we introduce a semantic-guided 3DGS method with adaptive rendering, which optimizes important structures through the guidance of boundary Gaussians, while leveraging semantic features to enhance the rendering of main objects in adaptive rendering. Experiments show that the proposed semantic-guided method can enhance important structures and high-frequency information in corner regions without significantly increasing the total number of Gaussians. This method also improves the separability between objects. At the same time, a semantic-guided Level-of-Detail (LoD) rendering approach enables the rapid display of main targets and the rendering of a complete scene. The semantic-guided methodology we have presented exhibits compatibility with a range of existing techniques. The code, more experimental results, and online demo will be available at https://zhouannan.github.io/SGGS/.
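A toy version of the semantic-guided level-of-detail idea, keeping all Gaussians labelled as main objects and filling the remaining render budget with the most important background Gaussians, might look like the following; the budget rule and the importance score are assumptions, not the paper's LoD scheme.

```python
import numpy as np

def select_lod_subset(semantic_is_main, importance, budget_fraction=0.3):
    # semantic_is_main: bool array, True for Gaussians on main objects
    # importance: per-Gaussian score used to rank background Gaussians
    n = len(semantic_is_main)
    budget = int(budget_fraction * n)
    keep = semantic_is_main.copy()                 # always render main objects
    remaining = budget - keep.sum()
    if remaining > 0:
        bg = np.where(~semantic_is_main)[0]
        order = bg[np.argsort(-importance[bg])]    # most important background first
        keep[order[:remaining]] = True
    return keep

rng = np.random.default_rng(3)
is_main = rng.random(10_000) < 0.1
mask = select_lod_subset(is_main, rng.random(10_000), budget_fraction=0.2)
print(mask.sum(), "of", mask.size, "Gaussians rendered at this LoD")
```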

Citations: 0
LAMDA: Aiding Visual Exploration of Atomic Displacements in Molecular Dynamics Simulations.
IF 6.5 Pub Date: 2026-01-12 DOI: 10.1109/TVCG.2026.3652905
Rostyslav Hnatyshyn, Danny Perez, Gerik Scheuermann, Ross Maciejewski, Baldwin Nsonga

Contemporary materials science research is heavily conducted in silico, involving massive simulations of the atomic-scale evolution of materials. Cataloging basic patterns in the atomic displacements is key to understanding and predicting the evolution of physical properties. However, the combinatorial complexity of the space of possible transitions coupled with the overwhelming amount of data being produced by high-throughput simulations make such an analysis extremely challenging and time-consuming for domain experts. The development of visual analytics systems that facilitate the exploration of simulation data is an active field of research. While these systems excel in identifying temporal regions of interest, they treat each timestep of a simulation as an independent event without considering the behavior of the atomic displacements between timesteps. We address this gap by introducing LAMDA, a visual analytics system that allows domain experts to quickly and systematically explore state-to-state transitions. In LAMDA, transitions are hierarchically categorized, providing a basis for cataloging displacement behavior, as well as enabling the analysis of simulations at different resolutions, ranging from very broad qualitative classes of transitions to very narrow definitions of unit processes. LAMDA supports navigating the hierarchy of transitions, enabling scientists to visualize the commonalities between different transitions in each class in terms of invariant features characterizing local atomic environments, and LAMDA simplifies the analysis by capturing user inputs through annotations. We evaluate our system through a case study and report on findings from our domain experts.
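The hierarchical cataloging described above, from broad qualitative classes down to narrow unit processes, can be mimicked with a simple nested grouping; the class and unit-process labels below are illustrative placeholders, not LAMDA's taxonomy.

```python
from collections import defaultdict

def build_transition_hierarchy(transitions):
    # Group state-to-state transitions first by a broad qualitative class,
    # then by a narrower unit-process label.
    hierarchy = defaultdict(lambda: defaultdict(list))
    for t in transitions:
        hierarchy[t["broad_class"]][t["unit_process"]].append(t["id"])
    return hierarchy

catalog = build_transition_hierarchy([
    {"id": 0, "broad_class": "vacancy migration", "unit_process": "nearest-neighbor hop"},
    {"id": 1, "broad_class": "vacancy migration", "unit_process": "second-neighbor hop"},
    {"id": 2, "broad_class": "interstitial event", "unit_process": "dumbbell rotation"},
])
for broad, fine in catalog.items():
    print(broad, {k: len(v) for k, v in fine.items()})
```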

Citations: 0
Collaborative Small and Large Models for Crowd Simulation with Incomplete Trajectory Data.
IF 6.5 Pub Date: 2025-12-31 DOI: 10.1109/TVCG.2025.3649986
Zheng Wang, Chang Li, Hua Wang, Dong Chen, Shuo He, Yingcai Wu, Mingliang Xu

Crowd simulation plays a crucial role in various domains, including entertainment, urban planning, and safety assessment. Data-driven methods offer significant advantages in simulating natural and diverse crowd behaviors, enabling highly realistic simulations. However, existing methods often face challenges due to incomplete trajectory data and limited generalization to unfamiliar scenarios. To address these limitations, we propose a novel crowd simulation framework based on the collaboration of a small model and a large model. Inspired by the dual-process decision-making mechanism in cognitive psychology, this framework enables efficient handling of familiar scenarios while leveraging the reasoning capabilities of large models in complex or unfamiliar environments. The small model, responsible for generating fast and reactive behaviors, is trained on real-world incomplete trajectory data to learn movement patterns. The large model, which performs simulation correction to refine failed behaviors, leverages past successful and failed experiences to enhance behavior generation in complex scenarios. Experimental results demonstrate that our framework significantly improves simulation accuracy in the presence of missing trajectory segments and enhances cross-scene generalization.
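A minimal sketch of the dual-process loop, where a fast small model proposes each agent's move and a slower large model corrects proposals that fail a validity check, is shown below; the callables, the failure test, and the correction strategy are stand-ins, not the paper's models.

```python
import numpy as np

def simulate_step(positions, goals, obstacles, small_step, large_correct, radius=0.5):
    # positions, goals: [N, 2] arrays; obstacles: list of 2D points.
    # The small model proposes every agent's next position; proposals entering
    # an obstacle are treated as failures and handed to the large model.
    next_pos = positions.copy()
    for i in range(len(positions)):
        proposal = small_step(positions[i], goals[i])
        if any(np.linalg.norm(proposal - o) < radius for o in obstacles):   # failure case
            proposal = large_correct(positions[i], goals[i], obstacles)
        next_pos[i] = proposal
    return next_pos

# Toy stand-ins: the small model walks straight at the goal, the large model detours upward.
small = lambda p, g: p + 0.1 * (g - p)
large = lambda p, g, obs: p + 0.1 * (g - p) + np.array([0.0, 0.3])
pos = np.array([[0.0, 0.0], [2.0, 0.0]])
goal = np.array([[4.0, 0.0], [4.0, 0.0]])
print(simulate_step(pos, goal, obstacles=[np.array([0.5, 0.0])],
                    small_step=small, large_correct=large))
```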

Citations: 0