
Latest publications in IEEE Transactions on Visualization and Computer Graphics

Here's What You Need to Know about My Data: Exploring Expert Knowledge's Role in Data Analysis.
IF 6.5 Pub Date: 2025-12-10 DOI: 10.1109/TVCG.2025.3634821
Haihan Lin, Maxim Lisnic, Derya Akbaba, Miriah Meyer, Alexander Lex

Data-driven decision making has become a popular practice in science, industry, and public policy. Yet data alone, as an imperfect and partial representation of reality, is often insufficient to make good analysis decisions. Knowledge about the context of a dataset, its strengths and weaknesses, and its applicability for certain tasks is essential. Analysts are often not only familiar with the data itself, but also have data hunches about their analysis subject. In this work, we present an interview study with analysts from a wide range of domains and with varied expertise and experience, inquiring about the role of contextual knowledge. We provide insights into how data is insufficient in analysts' workflows and how they incorporate other sources of knowledge into their analysis. We analyzed how knowledge of data shaped their analysis outcome. Based on the results, we suggest design opportunities to better and more robustly consider both knowledge and data in analysis processes.

Citations: 0
OM4AnI: A Novel Overlap Measure for Anomaly Identification in Multi-Class Scatterplots.
IF 6.5 Pub Date: 2025-12-10 DOI: 10.1109/TVCG.2025.3642219
Liqun Liu, Leonid Bogachev, Mahdi Rezaei, Nishant Ravikumar, Arjun Khara, Mohsen Azarmi, Roy A Ruddle

Scatterplots are widely used across various domains to identify anomalies in datasets, particularly in multi-class settings, such as detecting misclassified or mislabeled data. However, scatterplot effectiveness often declines with large datasets due to limited display resolution. This paper introduces a novel Visual Quality Measure (VQM) - OM4AnI (Overlap Measure for Anomaly Identification) - which quantifies the degree of overlap for identifying anomalies, helping users estimate how effectively anomalies can be observed in multi-class scatterplots. OM4AnI begins by computing an anomaly index based on each data point's position relative to its class cluster. The scatterplot is then discretized into a matrix representation by binning the display space into cell-level (pixel-level) grids and computing the coverage for each pixel. This computation takes into account the anomaly index of the data points covering each pixel as well as visual features (marker shapes, marker sizes, and rendering orders). Building on this foundation, we sum the coverage information over all cells (pixels) of the matrix representation to obtain the final quality score for anomaly identification. We conducted an evaluation to analyze the efficiency, effectiveness, and sensitivity of OM4AnI in comparison with six representative baseline methods that operate at different computation granularity levels: data level, marker level, and pixel level. The results show that OM4AnI outperforms the baseline methods, exhibiting more monotonic trends against the ground truth and greater sensitivity to rendering order. This confirms that OM4AnI can inform users about how effectively their scatterplots support anomaly identification. Overall, OM4AnI shows strong potential as an evaluation metric and for optimizing scatterplots through automatic adjustment of visual parameters.
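The abstract describes the computation only at a high level. As a rough, non-authoritative illustration of the pixel-binning idea, the NumPy sketch below rasterizes markers in rendering order and reports how much of the total anomaly mass remains visible after overplotting; the centroid-distance anomaly index, square fixed-size markers, and all names and parameters are assumptions, not the published OM4AnI formula.

```python
import numpy as np

def om4ani_sketch(points, labels, render_order, grid=256, radius=1):
    """Toy pixel-binned score of how much anomaly information survives overplotting."""
    points, labels = np.asarray(points, dtype=float), np.asarray(labels)

    # 1. Per-point anomaly index: distance to the class centroid, normalized per class.
    anomaly = np.zeros(len(points))
    for c in np.unique(labels):
        m = labels == c
        d = np.linalg.norm(points[m] - points[m].mean(axis=0), axis=1)
        anomaly[m] = d / (d.max() + 1e-9)

    # 2. Bin the display space into a grid and rasterize square markers in drawing
    #    order; each cell keeps the anomaly index of the marker visible on top.
    lo, hi = points.min(axis=0), points.max(axis=0)
    cells = np.floor((points - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    visible = np.full((grid, grid), np.nan)
    for i in np.argsort(render_order):
        cx, cy = cells[i]
        visible[max(cx - radius, 0):cx + radius + 1,
                max(cy - radius, 0):cy + radius + 1] = anomaly[i]

    # 3. Score: visible anomaly mass relative to the mass that would be shown with
    #    no occlusion, so hiding anomalous points behind others lowers the score.
    full_mass = anomaly.sum() * (2 * radius + 1) ** 2
    return np.nansum(visible) / max(full_mass, 1e-9)
```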

Citations: 0
Mixture of Cluster-guided Experts for Retrieval-Augmented Label Placement.
IF 6.5 Pub Date: 2025-12-10 DOI: 10.1109/TVCG.2025.3642518
Pingshun Zhang, Enyu Che, Yinan Chen, Bingyao Huang, Haibin Ling, Jingwei Qu

Text labels are widely used to convey auxiliary information in visualization and graphic design. The substantial variability in the categories and structures of labeled objects leads to diverse label layouts. Recent single-model learning-based solutions in label placement struggle to capture fine-grained differences between these layouts, which in turn limits their performance. In addition, although human designers often consult previous works to gain design insights, existing label layouts typically serve merely as training data, limiting the extent to which embedded design knowledge can be exploited. To address these challenges, we propose a mixture of cluster-guided experts (MoCE) solution for label placement. In this design, multiple experts jointly refine layout features, with each expert responsible for a specific cluster of layouts. A cluster-based gating function assigns input samples to experts based on representation clustering. We implement this idea through the Label Placement Cluster-guided Experts (LPCE) model, in which a MoCE layer integrates multiple feed-forward networks (FFNs), with each expert composed of a pair of FFNs. Furthermore, we introduce a retrieval augmentation strategy into LPCE, which retrieves and encodes reference layouts for each input sample to enrich its representations. Extensive experiments demonstrate that LPCE achieves superior performance in label placement, both quantitatively and qualitatively, surpassing a range of state-of-the-art baselines. Our algorithm is available at https://github.com/PingshunZhang/LPCE.
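To make the cluster-guided routing concrete, the following is a minimal PyTorch sketch of a mixture-of-experts layer whose gating assigns each sample to the expert of its nearest precomputed cluster centroid (e.g., from k-means over layout representations), with a pair of stacked feed-forward blocks per expert. The class name, layer sizes, and centroid handling are illustrative assumptions, not the LPCE implementation.

```python
import torch
import torch.nn as nn

class ClusterGuidedMoE(nn.Module):
    """Sketch of a mixture-of-experts layer gated by representation clustering."""

    def __init__(self, dim, hidden, n_experts, centroids):
        super().__init__()
        # One expert per layout cluster; each expert stacks a pair of feed-forward blocks.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim),
                nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim),
            )
            for _ in range(n_experts)
        )
        # Fixed cluster centroids in feature space (e.g., k-means over training layouts).
        self.register_buffer("centroids", centroids)  # shape: (n_experts, dim)

    def forward(self, x):  # x: (batch, dim) layout features
        # Cluster-based gating: route each sample to the expert of its nearest centroid.
        assignment = torch.cdist(x, self.centroids).argmin(dim=1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = assignment == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out

# Example: refine a batch of 256-d layout features with 4 cluster-specific experts.
# layer = ClusterGuidedMoE(dim=256, hidden=512, n_experts=4, centroids=torch.randn(4, 256))
# refined = layer(torch.randn(8, 256))
```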

Citations: 0
DiffPortraitVideo: Diffusion-based Expression-Consistent Zero-Shot Portrait Video Translation.
IF 6.5 Pub Date: 2025-12-10 DOI: 10.1109/TVCG.2025.3642300
Shaoxu Li, Chuhang Ma, Ye Pan

Zero-shot text-to-video diffusion models are crafted to expand pre-trained image diffusion models to the video domain without additional training. In recent times, prevailing techniques commonly rely on existing shapes as constraints and introduce inter-frame attention to ensure texture consistency. However, such shape constraints tend to restrict the stylized geometric deformation of videos and inadvertently neglect the original texture characteristics. Furthermore, existing methods suffer from flickering and inconsistent facial expressions. In this paper, we present DiffPortraitVideo. The framework employs a diffusion model-based feature and attention injection mechanism to generate key frames, with cross-frame constraints to enforce coherence and adaptive feature fusion to ensure expression consistency. Our approach achieves high spatio-temporal and expression consistency while retaining the textual and original image properties. Extensive and comprehensive experiments are conducted to validate the efficacy of our proposed framework in generating personalized, high-quality, and coherent videos. This not only showcases the superiority of our method over existing approaches but also paves the way for further research and development in the field of text-to-video generation with enhanced personalization and quality.
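For background, the inter-frame attention that the abstract says prevailing zero-shot methods rely on can be written as attention in which every frame reuses the keys and values of one anchor frame. The snippet below captures only that generic idea under an assumed tensor layout; it is not DiffPortraitVideo's feature and attention injection mechanism.

```python
import torch
import torch.nn.functional as F

def cross_frame_attention(q, k, v, anchor=0):
    """Generic cross-frame attention: all frames attend to one anchor frame's
    keys/values so the generated texture stays consistent across frames.

    q, k, v: (frames, heads, tokens, head_dim) projections from a self-attention layer.
    """
    k_ref = k[anchor:anchor + 1].expand_as(k)  # share the anchor frame's keys...
    v_ref = v[anchor:anchor + 1].expand_as(v)  # ...and values with every frame
    return F.scaled_dot_product_attention(q, k_ref, v_ref)
```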

Citations: 0
Causality-based Visual Analytics of Sentiment Contagion in Social Media Topics.
IF 6.5 Pub Date: 2025-12-08 DOI: 10.1109/TVCG.2025.3633839
Renzhong Li, Shuainan Ye, Yuchen Lin, Buwei Zhou, Zhining Kang, Tai-Quan Peng, Wenhao Fu, Tan Tang, Yingcai Wu

Sentiment contagion occurs when attitudes toward one topic are influenced by attitudes toward others. Detecting and understanding this phenomenon is essential for analyzing topic evolution and informing social policies. Prior research has developed models to simulate the contagion process through hypothesis testing and has visualized user-topic correlations to aid comprehension. Nevertheless, the vast volume of topics and the complex interrelationships on social media present two key challenges: (1) efficient construction of large-scale sentiment contagion networks, and (2) in-depth explorations of these networks. To address these challenges, we introduce a causality-based framework that efficiently constructs and explains sentiment contagion. We further propose a map-like visualization technique that encodes time using a horizontal axis, enabling efficient visualization of causality-based sentiment flow while maintaining scalability through limitless spatial segmentation. Based on the visualization, we develop CausalMap, a system that supports analysts in tracing sentiment contagion pathways and assessing the influence of different demographic groups. Furthermore, we conduct comprehensive evaluations--including two use cases, a task-based user study, an expert interview, and an algorithm evaluation--to validate the usability and effectiveness of our approach.
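The abstract does not specify which causal-inference technique the framework uses to construct the contagion network, so the sketch below is only one plausible stand-in: a pairwise Granger-causality screen over per-topic sentiment time series using statsmodels. The function name, significance threshold, and data layout are assumptions.

```python
from statsmodels.tsa.stattools import grangercausalitytests

def sentiment_contagion_edges(sentiment, topics, max_lag=3, alpha=0.01):
    """Toy construction of a directed sentiment-contagion network.

    sentiment : (timesteps, n_topics) array of mean sentiment per topic per time step
    topics    : list of topic names, length n_topics
    Returns (source_topic, target_topic, p_value) edges where the source's sentiment
    helps predict the target's (Granger causality), screened at level alpha.
    """
    edges = []
    n_topics = sentiment.shape[1]
    for src in range(n_topics):
        for dst in range(n_topics):
            if src == dst:
                continue
            # Column order expected by grangercausalitytests: [effect, candidate cause].
            result = grangercausalitytests(sentiment[:, [dst, src]], max_lag, verbose=False)
            p_value = min(result[lag][0]["ssr_ftest"][1] for lag in result)
            if p_value < alpha:
                edges.append((topics[src], topics[dst], p_value))
    return edges
```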

Citations: 0
DeepVIS: Bridging Natural Language and Data Visualization Through Step-wise Reasoning.
IF 6.5 Pub Date: 2025-12-08 DOI: 10.1109/TVCG.2025.3634645
Zhihao Shuai, Boyan Li, Siyu Yan, Yuyu Luo, Weikai Yang

Although data visualization is powerful for revealing patterns and communicating insights, creating effective visualizations requires familiarity with authoring tools and often disrupts the analysis flow. While large language models show promise for automatically converting analysis intent into visualizations, existing methods function as black boxes without transparent reasoning processes, which prevents users from understanding design rationales and refining suboptimal outputs. To bridge this gap, we propose integrating Chain-of-Thought (CoT) reasoning into the Natural Language to Visualization (NL2VIS) pipeline. First, we design a comprehensive CoT reasoning process for NL2VIS and develop an automatic pipeline to equip existing datasets with structured reasoning steps. Second, we introduce nvBench-CoT, a specialized dataset capturing detailed step-by-step reasoning from ambiguous natural language descriptions to finalized visualizations, which enables state-of-the-art performance when used for model fine-tuning. Third, we develop DeepVIS, an interactive visual interface that tightly integrates with the CoT reasoning process, allowing users to inspect reasoning steps, identify errors, and make targeted adjustments to improve visualization outcomes. Quantitative benchmark evaluations, two use cases, and a user study collectively demonstrate that our CoT framework effectively enhances NL2VIS quality while providing insightful reasoning steps to users.
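As a concrete illustration of what a step-wise reasoning record in such a pipeline could look like, the hypothetical example below pairs a natural-language query with intermediate reasoning steps and a final Vega-Lite-style specification. The field names, steps, and query are invented for illustration and are not the actual nvBench-CoT schema.

```python
# Hypothetical NL2VIS training sample with chain-of-thought steps (illustrative only).
cot_record = {
    "nl_query": "Show how the average ticket price changed per year for each region",
    "reasoning_steps": [
        "Identify the measure: the 'price' column, aggregated by mean.",
        "Map the temporal field 'year' to the x-axis and the categorical 'region' to color.",
        "A trend over time with a categorical breakdown suggests a multi-series line chart.",
        "Group by year and region, then compute mean(price) for the y-axis.",
    ],
    "visualization_spec": {  # final answer, expressed in a Vega-Lite-like form
        "mark": "line",
        "encoding": {
            "x": {"field": "year", "type": "temporal"},
            "y": {"aggregate": "mean", "field": "price", "type": "quantitative"},
            "color": {"field": "region", "type": "nominal"},
        },
    },
}
```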

Citations: 0
From Vision to Touch: Bridging Visual and Tactile Principles for Accessible Data Representation.
IF 6.5 Pub Date: 2025-12-05 DOI: 10.1109/TVCG.2025.3634254
Kim Marriott, Matthew Butler, Leona Holloway, William Jolley, Bongshin Lee, Bruce Maguire, Danielle Albers Szafir

Tactile graphics are widely used to present maps and statistical diagrams to blind and low vision (BLV) people, with accessibility guidelines recommending their use for graphics where spatial relationships are important. Their use is expected to grow with the advent of commodity refreshable tactile displays. However, in stark contrast to visual information graphics, we lack a clear understanding of the benefits that well-designed tactile information graphics offer over text descriptions for BLV people. To address this gap, we introduce a framework considering the three components of encoding, perception and cognition to examine the known benefits for visual information graphics and explore their applicability to tactile information graphics. This work establishes a preliminary theoretical foundation for the tactile-first design of information graphics and identifies future research avenues.

Citations: 0
VizGenie: Toward Self-Refining, Domain-Aware Workflows for Next-Generation Scientific Visualization.
IF 6.5 Pub Date: 2025-12-05 DOI: 10.1109/TVCG.2025.3634655
Ayan Biswas, Terece L Turton, Nishath Rajiv Ranasinghe, Shawn Jones, Bradley Love, William Jones, Aric Hagberg, Han-Wei Shen, Nathan DeBardeleben, Earl Lawrence

We present VizGenie, a self-improving, agentic framework that advances scientific visualization through large language models (LLMs) by orchestrating a collection of domain-specific and dynamically generated modules. Users initially access core functionalities, such as threshold-based filtering, slice extraction, and statistical analysis, through pre-existing tools. For tasks beyond this baseline, VizGenie autonomously employs LLMs to generate new visualization scripts (e.g., VTK Python code), expanding its capabilities on demand. Each generated script undergoes automated backend validation and is seamlessly integrated upon successful testing, continuously enhancing the system's adaptability and robustness. A distinctive feature of VizGenie is its intuitive natural language interface, allowing users to issue high-level feature-based queries (e.g., "visualize the skull" or "highlight tissue boundaries"). The system leverages image-based analysis and visual question answering (VQA) via fine-tuned vision models to interpret these queries precisely, bridging domain expertise and technical implementation. Additionally, users can interactively query generated visualizations through VQA, facilitating deeper exploration. Reliability and reproducibility are further strengthened by Retrieval-Augmented Generation (RAG), providing context-driven responses while maintaining comprehensive provenance records. Evaluations on complex volumetric datasets demonstrate significant reductions in cognitive overhead for iterative visualization tasks. By integrating curated domain-specific tools with LLM-driven flexibility, VizGenie not only accelerates insight generation but also establishes a sustainable, continuously evolving visualization practice. The resulting platform dynamically learns from user interactions, consistently enhancing support for feature-centric exploration and reproducible research in scientific visualization.
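As an illustration of the kind of standalone VTK Python module such a pipeline might generate and then validate headlessly, the sketch below renders an isosurface off screen and writes it to an image. The file names, iso-value, and function name are placeholders, not output from the actual system.

```python
# Illustrative only: a minimal VTK script of the sort an LLM agent might emit for a
# "visualize the skull" request; paths and the iso-value are placeholder assumptions.
import vtk

def render_isosurface(volume_path="head.vti", iso_value=500.0, out_png="skull.png"):
    reader = vtk.vtkXMLImageDataReader()
    reader.SetFileName(volume_path)

    contour = vtk.vtkContourFilter()               # extract the iso-surface (e.g., bone)
    contour.SetInputConnection(reader.GetOutputPort())
    contour.SetValue(0, iso_value)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(contour.GetOutputPort())
    mapper.ScalarVisibilityOff()

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.SetOffScreenRendering(1)                # headless, so a backend can test it
    window.AddRenderer(renderer)
    window.SetSize(800, 800)
    window.Render()

    grabber = vtk.vtkWindowToImageFilter()
    grabber.SetInput(window)
    writer = vtk.vtkPNGWriter()
    writer.SetFileName(out_png)
    writer.SetInputConnection(grabber.GetOutputPort())
    writer.Write()

if __name__ == "__main__":
    render_isosurface()
```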

Citations: 0
Your Model Is Unfair, Are You Even Aware? Inverse Relationship between Comprehension and Trust in Explainability Visualizations of Biased ML Models.
IF 6.5 Pub Date: 2025-12-05 DOI: 10.1109/TVCG.2025.3634245
Zhanna Kaufman, Madeline Endres, Cindy Xiong Bearfield, Yuriy Brun

Systems relying on ML have become ubiquitous, but so has biased behavior within them. Research shows that bias significantly affects stakeholders' trust in systems and how they use them. Further, stakeholders of different backgrounds view and trust the same systems differently. Thus, how ML models' behavior is explained plays a key role in comprehension and trust. We survey explainability visualizations, creating a taxonomy of design characteristics. We conduct user studies to evaluate five state-of-the-art visualization tools (LIME, SHAP, CP, Anchors, and ELI5) for model explainability, measuring how taxonomy characteristics affect comprehension, bias perception, and trust for non-expert ML users. Surprisingly, we find an inverse relationship between comprehension and trust: the better users understand the models, the less they trust them. We investigate the cause and find that this relationship is strongly mediated by bias perception: more comprehensible visualizations increase people's perception of bias, and increased bias perception reduces trust. We confirm this relationship is causal: Manipulating explainability visualizations to control comprehension, bias perception, and trust, we show that visualization design can significantly (p < 0.001) increase comprehension, increase perceived bias, and reduce trust. Conversely, reducing perceived model bias, either by improving model fairness or by adjusting visualization design, significantly increases trust even when comprehension remains high. Our work advances understanding of how comprehension affects trust and systematically investigates visualization's role in facilitating responsible ML applications.

Citations: 0
Understanding the Research-Practice Gap in Visualization Design Guidelines.
IF 6.5 Pub Date: 2025-12-05 DOI: 10.1109/TVCG.2025.3640072
Nam Wook Kim, Grace Myers, Jinhan Choi, Yoonsuh Cho, Changhoon Oh, Yea-Seul Kim

Empirical research on perception and cognition has laid the foundation for visualization design, often distilled into practical guidelines intended to support effective chart creation. However, it remains unclear how well these research-driven insights are reflected in the guidelines practitioners actually use. In this paper, we investigate the research-practice gap in visualization design guidelines through a mixed-methods approach. We first collected design guidelines from practitioner-facing sources and empirical studies from academic venues to assess their alignment. To complement this analysis, we conducted surveys and interviews with practitioners and researchers to examine their experiences, perceptions, and challenges surrounding the development and use of design guidelines. Our findings reveal misalignment between empirical evidence and widely used guidelines, differing perspectives between communities, and key barriers that contribute to the persistence of the research-practice gap.

Citations: 0