
Computer Graphics Forum: Latest Publications

Exploring Classifiers with Differentiable Decision Boundary Maps
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15109
A. Machado, M. Behrisch, A. Telea

Explaining Machine Learning (ML) — and especially Deep Learning (DL) — classifiers' decisions is a subject of interest across fields due to the increasing ubiquity of such models in computing systems. As models get increasingly complex, relying on sophisticated machinery to recognize data patterns, explaining their behavior becomes more difficult. Directly visualizing classifier behavior is in general infeasible, as they create partitions of the data space, which is typically high dimensional. In recent years, Decision Boundary Maps (DBMs) have been developed, taking advantage of projection and inverse projection techniques. By being able to map 2D points back to the data space and subsequently run a classifier, DBMs represent a slice of classifier outputs. However, we recognize that DBMs without additional explanatory views are limited in their applicability. In this work, we propose augmenting the naive DBM generating process with views that provide more in-depth information about classifier behavior, such as whether the training procedure is locally stable. We describe our proposed views — which we term Differentiable Decision Boundary Maps — over a running example, explaining how our work enables drawing new and useful conclusions from these dense maps. We further demonstrate the value of these conclusions by showing how useful they would be in carrying out or preventing a dataset poisoning attack. We thus provide evidence of the ability of our proposed views to make DBMs significantly more trustworthy and interpretable, increasing their utility as a model understanding tool.
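As a rough illustration of the naive DBM generating process described above, the sketch below uses PCA as a stand-in for the projection/inverse-projection pair and a scikit-learn classifier; the projection techniques and classifiers used in the paper may differ.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Train any classifier on high-dimensional data.
X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=2000).fit(X, y)

# Project the data to 2D (PCA stands in for the projection technique here).
proj = PCA(n_components=2).fit(X)
X2d = proj.transform(X)

# Sample a regular 2D grid covering the projected data.
res = 100
xs = np.linspace(X2d[:, 0].min(), X2d[:, 0].max(), res)
ys = np.linspace(X2d[:, 1].min(), X2d[:, 1].max(), res)
gx, gy = np.meshgrid(xs, ys)
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Map every 2D grid point back to data space and run the classifier:
# the resulting label image is one 2D slice of the classifier's behavior.
labels = clf.predict(proj.inverse_transform(grid)).reshape(res, res)
```

Rendering `labels` as an image (one color per class) yields the kind of naive map that the proposed explanatory views are meant to augment.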

Citations: 0
Visual Analytics for Fine-grained Text Classification Models and Datasets
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15098
M. Battogtokh, Y. Xing, C. Davidescu, A. Abdul-Rahman, M. Luck, R. Borgo

In natural language processing (NLP), text classification tasks are increasingly fine-grained, as datasets are fragmented into a larger number of classes that are more difficult to differentiate from one another. As a consequence, the semantic structures of datasets have become more complex, and model decisions more difficult to explain. Existing tools, suited for coarse-grained classification, falter under these additional challenges. In response to this gap, we worked closely with NLP domain experts in an iterative design-and-evaluation process to characterize and tackle the growing requirements in their workflow of developing fine-grained text classification models. The result of this collaboration is the development of SemLa, a novel Visual Analytics system tailored for 1) dissecting complex semantic structures in a dataset when it is spatialized in model embedding space, and 2) visualizing fine-grained nuances in the meaning of text samples to faithfully explain model reasoning. This paper details the iterative design study and the resulting innovations featured in SemLa. The final design allows contrastive analysis at different levels by unearthing lexical and conceptual patterns including biases and artifacts in data. Expert feedback on our final design and case studies confirm that SemLa is a useful tool for supporting model validation and debugging as well as data annotation.

Citations: 0
RouteVis: Quantitative Visual Analytics of Various Factors to Understand Route Choice Preferences
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15091
C. Lv, H. Zhang, Y. Lin, J. Dong, L. Tian

Analyzing the preference of route choice not only facilitates the understanding of individuals' decision-making behavior, but also provides valuable information for improving traffic management strategies. As the layout of the road network, the variability of individual preferences and the spatial distribution of origins and destinations all play a role in route choice, it is a great challenge to reveal the interplay of such numerous complex factors. In this paper, we propose RouteVis, an interactive visual analytics system that enables traffic analysts to gain insight into what factors drive individuals to choose a specific route. To uncover the relationship between route choice and influencing factors, we design a quantitative analytical framework that supports analysts in conducting closed-loop analysis of various factors, i.e., data preprocessing, route identification, and the quantification of influence and contribution. Furthermore, given the multidimensional and spatio-temporal characteristics of the analysis results, we customize a set of coordinated views and visual designs to provide an intuitive presentation of the factors affecting people's travels, thus freeing analysts from tedious repetitive tasks and significantly enhancing work efficiency. Two typical usage scenarios and expert feedback on the system's functionality demonstrate that RouteVis can greatly enhance the capabilities of understanding the travel status.

Citations: 0
Improving Temporal Treemaps by Minimizing Crossings
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15087
Alexander Dobler, Martin Nöllenburg

Temporal trees are trees that evolve over a discrete set of time steps. Each time step is associated with a node-weighted rooted tree and consecutive trees change by adding new nodes, removing nodes, splitting nodes, merging nodes, and changing node weights. Recently, two-dimensional visualizations of temporal trees called temporal treemaps have been proposed, representing the temporal dimension on the x-axis, and visualizing the tree modifications over time as temporal edges of varying thickness. The tree hierarchy at each time step is depicted as vertical, one-dimensional nesting relationships, similarly to standard, non-temporal treemaps. Naturally, temporal edges can cross in the visualization, decreasing readability. Heuristics were proposed to minimize such crossings in the literature, but a formal characterization and minimization of crossings in temporal treemaps was left open. In this paper, we propose two variants of defining crossings in temporal treemaps that can be combinatorially characterized. For each variant, we propose an exact optimization algorithm based on integer linear programming and heuristics based on graph drawing techniques. In an extensive experimental evaluation, we show that on the one hand the exact algorithms reduce the number of crossings by a factor of 20 on average compared to the previous algorithms. On the other hand, our new heuristics are faster by a factor of more than 100 and still reduce the number of crossings by a factor of almost three.
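The abstract does not reproduce the paper's formal crossing definitions. Under one simple assumption, that each temporal edge connects a vertical position at time t to a position at time t+1 and that two edges cross when their relative order flips, the sketch below counts the quantity an ILP or heuristic would then minimize over the choice of orderings.

```python
from itertools import combinations

def count_crossings(edges):
    """edges: (position at time t, position at time t+1) for each temporal edge."""
    return sum(
        1
        for (a1, b1), (a2, b2) in combinations(edges, 2)
        if (a1 - a2) * (b1 - b2) < 0  # relative vertical order flips
    )

# Edge 0 stays on top; edges 1 and 2 swap places -> one crossing.
print(count_crossings([(0, 0), (1, 2), (2, 1)]))  # 1
```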

Citations: 0
Antarstick: Extracting Snow Height From Time-Lapse Photography
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15088
Matěj Lang, Radoslav Mráz, Marek Trtík, Sergej Stoppel, Jan Byška, Barbora Kozlíková

The evolution and accumulation of snow cover are among the most important characteristics influencing Antarctica's climate and biotopes. The changes in Antarctica are also substantially impacting global climate change. Therefore, detailed monitoring of snow evolution is key to understanding such changes. One way to conduct this monitoring is by installing trail cameras in a particular region and then processing the captured information. This option is affordable, but has some drawbacks; in particular, a fully automatic solution for extracting snow height from these images is not feasible, so human intervention is still required to manually correct inaccurately extracted values. In this paper, we present Antarstick, a tool that visually guides the user to potentially wrong values extracted from poor-quality images and supports their interactive correction. This tool allows for much quicker and semi-automated processing of snow height from time-lapse photography.

Citations: 0
AVA: Towards Autonomous Visualization Agents through Visual Perception-Driven Decision-Making
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15093
S. Liu, H. Miao, Z. Li, M. Olson, V. Pascucci, P-T. Bremer

With recent advances in multi-modal foundation models, the previously text-only large language models (LLMs) have evolved to incorporate visual input, opening up unprecedented opportunities for various applications in visualization. Compared to existing LLM-based visualization work that generates and controls visualizations with textual input and output only, the proposed approach explores the utilization of the visual processing ability of multi-modal LLMs to develop Autonomous Visualization Agents (AVAs) that can evaluate the generated visualization and iterate on the result to accomplish user objectives defined through natural language. We propose the first framework for the design of AVAs and present several usage scenarios intended to demonstrate the general applicability of the proposed paradigm. Our preliminary exploration and proof-of-concept agents suggest that this approach can be widely applicable whenever the choice of appropriate visualization parameters requires the interpretation of previous visual output. Our study indicates that AVAs represent a general paradigm for designing intelligent visualization systems that can achieve high-level visualization goals, paving the way for developing expert-level visualization agents in the future.
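A minimal sketch of the perception-driven loop described here, under the assumption of a hypothetical renderer and a hypothetical multi-modal LLM call; `render_visualization` and `ask_multimodal_llm` are placeholder stubs, not the paper's API.

```python
import json

def render_visualization(params: dict) -> bytes:
    """Hypothetical renderer stub: would return an image of the current chart."""
    return b"<png bytes>"

def ask_multimodal_llm(prompt: str, image: bytes) -> str:
    """Hypothetical multi-modal LLM stub: would return a JSON string."""
    return '{"done": true}'

def run_agent(objective: str, params: dict, max_iters: int = 5) -> dict:
    """Render, let the model judge the image against the objective, iterate."""
    for _ in range(max_iters):
        image = render_visualization(params)
        reply = ask_multimodal_llm(
            prompt=(
                "Objective: " + objective + "\n"
                "Current parameters: " + json.dumps(params) + "\n"
                'Reply {"done": true} if the image meets the objective, '
                'otherwise {"done": false, "params": {...updated values...}}.'
            ),
            image=image,
        )
        feedback = json.loads(reply)
        if feedback.get("done"):
            break
        params.update(feedback.get("params", {}))
    return params

print(run_agent("Make all clusters distinguishable", {"point_size": 2.0}))
```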

Citations: 0
Open Your Ears and Take a Look: A State-of-the-Art Report on the Integration of Sonification and Visualization
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15114
K. Enge, E. Elmquist, V. Caiola, N. Rönnberg, A. Rind, M. Iber, S. Lenzi, F. Lan, R. Höldrich, W. Aigner

The research communities studying visualization and sonification for data display and analysis share exceptionally similar goals, essentially making data of any kind interpretable to humans. One community does so by using visual representations of data, and the other community employs auditory (non-speech) representations of data. While the two communities have a lot in common, they developed mostly in parallel over the course of the last few decades. With this STAR, we discuss a collection of work that bridges the borders of the two communities, hence a collection of work that aims to integrate the two techniques into one form of audiovisual display, which we argue to be “more than the sum of the two.” We introduce and motivate a classification system applicable to such audiovisual displays and categorize a corpus of 57 academic publications that appeared between 2011 and 2023 in categories such as reading level, dataset type, or evaluation system, to mention a few. The corpus also enables a meta-analysis of the field, including regularly occurring design patterns such as type of visualization and sonification techniques, or the use of visual and auditory channels, showing an overall diverse field with different designs. An analysis of a co-author network of the field shows individual teams without many interconnections. The body of work covered in this STAR also relates to three adjacent topics: audiovisual monitoring, accessibility, and audiovisual data art. These three topics are discussed individually in addition to the systematically conducted part of this research. The findings of this report may be used by researchers from both fields to understand the potentials and challenges of such integrated designs while hopefully inspiring them to collaborate with experts from the respective other field.

Citations: 0
AutoVizuA11y: A Tool to Automate Screen Reader Accessibility in Charts
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15099
Diogo Duarte, Rita Costa, Pedro Bizarro, Carlos Duarte

Charts remain widely inaccessible on the web for users of assistive technologies like screen readers. This is, in part, due to data visualization experts still lacking the experience, knowledge, and time to consistently implement accessible charts. As a result, screen reader users are prevented from accessing information and are forced to resort to tabular alternatives (if available), limiting the insights that they can gather. We worked with both groups to develop AutoVizuA11y, a tool that automates the addition of accessible features to web-based charts. It generates human-like descriptions of the data using a large language model, calculates statistical insights from the data, and provides keyboard navigation between multiple charts and underlying elements. Fifteen screen reader users interacted with charts made accessible with AutoVizuA11y in a usability test, thirteen of whom praised the tool for its intuitive design, short learning curve, and rich information. On average, they took 66 seconds to complete each of the eight analytical tasks presented and achieved a success rate of 89%. Through a SUS questionnaire, the participants gave AutoVizuA11y an “Excellent” score — 83.5/100 points. We also gathered feedback from two data visualization experts who used the tool. They praised the tool availability, ease of use and functionalities, and provided feedback to add AutoVizuA11y support for other technologies in the future.
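As an illustration of the "statistical insights" step, the sketch below computes a few simple summary statistics for one numeric series; the statistics AutoVizuA11y actually computes and its LLM-generated wording are not specified in the abstract, so both the function and the output format here are purely illustrative.

```python
from statistics import mean

def chart_insights(name: str, values: list) -> str:
    """Summarize a numeric series the way a screen-reader description might."""
    lo, hi = min(values), max(values)
    trend = "rising" if values[-1] > values[0] else "falling or flat"
    return (
        f"{name}: {len(values)} data points, minimum {lo}, maximum {hi}, "
        f"mean {mean(values):.1f}, overall {trend}."
    )

print(chart_insights("Monthly sales", [12, 15, 14, 20, 22]))
```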

Citations: 0
Topological Characterization and Uncertainty Visualization of Atmospheric Rivers
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15084
Fangfei Lan, Brandi Gamelin, Lin Yan, Jiali Wang, Bei Wang, Hanqi Guo

Atmospheric rivers (ARs) are long, narrow regions of water vapor in the Earth's atmosphere that transport heat and moisture from the tropics to the mid-latitudes. ARs are often associated with extreme weather events in North America and contribute significantly to water supply and flood risk. However, characterizing ARs has been a major challenge due to the lack of a universal definition and their structural variations. Existing AR detection tools (ARDTs) produce distinct AR boundaries for the same event, making the risk assessment of ARs a difficult task. Understanding these uncertainties is crucial to improving the predictability of AR impacts, including their landfall areas and associated precipitation, which could cause catastrophic flooding and landslides over the coastal regions. In this work, we develop an uncertainty visualization framework that captures boundary and interior uncertainties, i.e., structural variations, of an ensemble of ARs that arise from a set of ARDTs. We first provide a statistical overview of the AR boundaries using the contour boxplots of Whitaker et al. that highlight the structural variations of AR boundaries based on their nesting relationships. We then introduce the topological skeletons of ARs based on Morse complexes that characterize the interior variation of an ensemble of ARs. We propose an uncertainty visualization of these topological skeletons, inspired by MetroSets of Jacobson et al. that emphasizes the agreements and disagreements across the ensemble members. Through case studies and expert feedback, we demonstrate that the two approaches complement each other, and together they could facilitate an effective comparative analysis process and provide a more confident outlook on an AR's shape, area, and onshore impact.

Citations: 0
A Systematic Literature Review of User Evaluation in Immersive Analytics
IF 2.5 | CAS Tier 4, Computer Science | Q1 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1111/cgf.15111
J. Friedl-Knirsch, F. Pointecker, S. Pfistermüller, C. Stach, C. Anthes, D. Roth

User evaluation is a common and useful tool for systematically generating knowledge and validating novel approaches in the domain of Immersive Analytics. Since this research domain centres around users, user evaluation is of extraordinary relevance. Additionally, Immersive Analytics is an interdisciplinary field of research where different communities bring in their own methodologies. It is vital to investigate and synchronise these different approaches with the long-term goal to reach a shared evaluation framework. While there have been several studies focusing on Immersive Analytics as a whole or on certain aspects of the domain, this is the first systematic review of the state of evaluation methodology in Immersive Analytics. The main objective of this systematic literature review is to illustrate methodologies and research areas that are still underrepresented in user studies by identifying current practice in user evaluation in the domain of Immersive Analytics in coherence with the PRISMA protocol. (see https://www.acm.org/publications/class-2012)

Citations: 0